Korean J Anesthesiol. 2017 Feb; 70(1).

Understanding one-way ANOVA using conceptual figures

Tae Kyun Kim

Department of Anesthesia and Pain Medicine, Pusan National University Yangsan Hospital and School of Medicine, Yangsan, Korea.

Analysis of variance (ANOVA) is one of the most frequently used statistical methods in medical research. The need for ANOVA arises from alpha level inflation: performing multiple comparisons increases the probability of a Type I error (false positive). ANOVA uses the F statistic, the ratio of the between-group variance to the within-group variance. Although the main interest of the analysis lies in the differences between group means, ANOVA addresses the question by comparing variances. The figures presented here serve as a guide to understanding how ANOVA resolves the problem of comparing means by using between-group and within-group variances.

Introduction

The difference in the means of two groups that are mutually independent and satisfy both the normality and equal variance assumptions can be tested with Student's t-test. However, we may have to determine whether differences exist in the means of three or more groups. Most readers are already aware that the most common analytical method for this is the one-way analysis of variance (ANOVA). The present article examines why a one-way ANOVA should be used instead of simply repeating comparisons with Student's t-test. ANOVA literally means analysis of variance, and this article uses conceptual illustrations to explain how the difference in means can be assessed by comparing the variances rather than the means themselves.

Significance Level Inflation

In the comparison of the means of three groups that are mutually independent and satisfy the normality and equal variance assumptions, pairing each group with another and performing all three paired comparisons1) inflates the Type I error. In other words, even though the null hypothesis is true, the probability of rejecting it increases, and thus the probability of wrongly concluding that the alternative hypothesis (research hypothesis) is significant increases, despite the fact that it is not.

Let us assume that the distribution of differences in the means of two groups is as shown in Fig. 1. The significance level (α) is the maximum allowable probability of claiming that “differences in means exist” when they do not; that is, the maximum probability of a Type I error of rejecting the null hypothesis that “differences in means do not exist” in a comparison of two mutually independent groups obtained from one experiment. When the null hypothesis is true, the probability of accepting it is 1 − α.

[Fig. 1. Distribution of the difference in the means of two groups (kjae-70-22-g001.jpg)]

Now, let us compare the means of three groups. The null hypothesis in the comparison of three groups is “the population means of the three groups are all the same”; however, the alternative hypothesis is not “the population means of the three groups are all different,” but rather “at least one of the population means of the three groups is different.” In other words, the null hypothesis (H0) and the alternative hypothesis (H1) are as follows:

H0: μ1 = μ2 = μ3
H1: not all population means are equal (at least one μi differs)

Therefore, if the means of any two of the three groups differ from each other, the null hypothesis can be rejected.

In that case, let us examine whether the probability of rejecting the entire null hypothesis stays at α when two successive comparisons are made on hypotheses that are not mutually independent. When the null hypothesis is true, rejecting it in even a single comparison rejects the entire null hypothesis. Accordingly, the probability of rejecting the entire null hypothesis over two comparisons is obtained by first calculating the probability of accepting the null hypothesis in both comparisons and then subtracting that value from 1. Therefore, the probability of rejecting the entire null hypothesis from two comparisons is:

1 − (1 − α)(1 − α) = 1 − (1 − α)²

If the comparisons are made n times, the probability of rejecting the entire null hypothesis can be expressed as:

1 − (1 − α)^n

It can be seen that as the number of comparisons increases, the probability of rejecting the entire null hypothesis also increases. Assuming the significance level for a single comparison to be 0.05, the increases in the probability of rejecting the entire null hypothesis according to the number of comparisons are shown in Table 1 .
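As a quick check of the numbers in Table 1, the inflated probability 1 − (1 − α)^n can be computed directly. A minimal Python sketch, assuming a per-comparison significance level of 0.05:

```python
# Family-wise error probability 1 - (1 - alpha)^n for n comparisons,
# assuming a per-comparison significance level alpha = 0.05.
alpha = 0.05
for n in (1, 2, 3, 6, 10):
    familywise = 1 - (1 - alpha) ** n
    print(f"{n} comparison(s): P(at least one false positive) = {familywise:.3f}")
```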

ANOVA Table

Although various methods, such as adjusting the significance level by the number of comparisons, have been used to avoid the hypothesis-testing error caused by significance level inflation, the method that resolves the problem with a single statistic is ANOVA. ANOVA stands for analysis of variance, and as the name implies, it analyzes variances. Let us examine why differences in means can be assessed by analyzing variances, even though the question we actually want to answer concerns the comparison of means.

For example, let us examine whether there are differences in the heights of students according to their grade ( Table 2 ). First, let us examine the ANOVA table ( Table 3 ) that is commonly obtained as a product of ANOVA. In Table 3, significance is ultimately determined using a significance probability (P value), and to obtain this value, the statistic and its position in the distribution to which it belongs must be known. In other words, there has to be a distribution that serves as the reference, and that distribution is called the F distribution; the F comes from the name of the statistician Ronald Fisher. The ANOVA test is also referred to as the F test, and the F distribution is the distribution formed by ratios of variances. Accordingly, the F statistic is expressed as a variance ratio, as shown below.

Table 2. Raw data of students' heights in three different classes; each class consists of thirty students.

F = [ Σ_{i=1}^{K} n_i ( Ȳ_i − Ȳ )² / ( K − 1 ) ] / [ Σ_{i=1}^{K} Σ_{j=1}^{n_i} ( Y_ij − Ȳ_i )² / ( N − K ) ]

Here, Ȳ_i is the mean of group i; n_i is the number of observations in group i; Ȳ is the overall mean; K is the number of groups; Y_ij is the j-th observation in group i; and N is the total number of observations. The F statistic is the ratio of the intergroup mean sum of squares to the intragroup mean sum of squares.
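Before turning to the conceptual figures, the formula can be made concrete with a short numerical sketch. The following Python code computes the intergroup and intragroup mean sums of squares and their ratio for three small hypothetical groups (illustrative values only, not the heights in Table 2):

```python
import numpy as np

# Three hypothetical groups (illustrative values, not the Table 2 heights)
groups = [np.array([168.0, 172.0, 171.0, 169.0]),
          np.array([175.0, 178.0, 174.0, 177.0]),
          np.array([181.0, 183.0, 180.0, 182.0])]

K = len(groups)                              # number of groups
N = sum(len(g) for g in groups)              # total number of observations
grand_mean = np.concatenate(groups).mean()   # overall mean (Y-bar)

# Intergroup mean sum of squares: sum of n_i * (group mean - overall mean)^2, over K - 1
ms_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups) / (K - 1)

# Intragroup mean sum of squares: sum of squared deviations from each group mean, over N - K
ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - K)

F = ms_between / ms_within
print(f"F({K - 1}, {N - K}) = {F:.3f}")
```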

It is not easy to look at this complex equation and understand ANOVA at a single glance, so its meaning will be explained with an illustration. Statistics can be regarded as a field that attempts to express data which are difficult to grasp in a brief and simple form. That is, instead of independently observing the scattered points shown in Fig. 2A, the data could be explained by lumping them together into a single representative value. Values such as the mean, median, and mode can serve as this representative value; here, let us assume that the black rectangle in the middle represents the overall mean. However, a closer look shows that the points inside the circle have different shapes and that points with the same shape appear to be gathered together. Therefore, explaining all the points with just the overall mean would be inappropriate, and the points should be divided into groups so that the same shapes belong to the same group. Although it is more cumbersome than explaining the entire data set with just the overall mean, it is more reasonable to first form groups of points with the same shape, establish the mean of each group, and then explain the data with the three means. Therefore, as shown in Fig. 2B, the data were divided into three groups, the mean was placed at the center of each group, and these three points were used to explain the entire data set. The question now arises as to how one can evaluate whether explaining the data with the three group means differs from explaining it with a single overall mean.

[Fig. 2. (A) Scattered data points and the overall mean (black rectangle); (B) the data divided into three groups, each with its own group mean; (C) distances from the overall mean to each group mean (solid arrows) and from each group mean to the data within that group (dotted arrows) (kjae-70-22-g002.jpg)]

First, let us measure the distance between the overall mean and the mean of each group, and the distance from the mean of each group to each data point within that group. The distance between the overall mean and the mean of each group is shown as a solid arrow ( Fig. 2C ). This distance is expressed as ( Ȳ_i − Ȳ )², which appears in the numerator of the equation for the F statistic, where it is multiplied by the number of data points in each group, n_i ( Ȳ_i − Ȳ )². This is because explaining a group by its representative value amounts to treating all the data in that group as if they were located at that value. This term therefore measures how much is gained by explaining the data with the group means rather than with the overall mean alone, and it represents the intergroup (between-group) variance.

Let us return to the equation for the F statistic. The term ( Y_ij − Ȳ_i )² in the denominator is illustrated in Fig. 2C by the dotted arrows, which run from the mean of each group to the data points within that group. This distance represents the variation of the data around their own group mean, which is the intragroup (within-group) variance.

Looking at the equation for the F statistic, each of these variances is divided by its corresponding degrees of freedom. As an analogy, stretch out the five fingers of one hand and let the index finger represent the mean finger length; if the differences in finger lengths are compared to obtain the variance, there are 5 fingers but only 4 gaps between them. To derive the mean variance here, the intergroup variation was divided by 2 degrees of freedom (three groups minus one), while the intragroup variation was divided by 87 degrees of freedom, obtained by subtracting 1 from each group's size of 30 and summing (90 − 3).

What is gained by deriving these variances can be described as follows. Figs. 3A and 3B give two contrasting examples. Although the data were divided into three groups, the intragroup variance may be too large ( Fig. 3A ): the boundaries between groups become ambiguous, the group means are not far from the overall mean, and it appears that nothing is gained by dividing the data into three groups; it would have been more efficient to explain the entire data set with the overall mean. Alternatively, when the intergroup variance is relatively larger than the intragroup variance, in other words, when the distance from the overall mean to the mean of each group is large ( Fig. 3B ), the boundaries between the groups become clearer, and explaining the data by dividing them into three groups is more logical than lumping them together under the overall mean.

[Fig. 3. (A) Intragroup variance large relative to the intergroup variance, so group boundaries are ambiguous; (B) intergroup variance relatively larger than the intragroup variance, so group boundaries are clear (kjae-70-22-g003.jpg)]

Ultimately, the position of the statistic derived in this manner as the ratio of the intergroup to the intragroup variance can be located on the F distribution ( Fig. 4 ). Since the statistic 3.629 in the ANOVA table lies to the right of 3.101, the value corresponding to a significance level of 0.05 in the F distribution with 2 and 87 degrees of freedom, the null hypothesis can be rejected.

[Fig. 4. F distribution with 2 and 87 degrees of freedom, showing the critical value 3.101 (α = 0.05) and the observed statistic 3.629 (kjae-70-22-g004.jpg)]
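The critical value and P value in this example can be checked with SciPy's F distribution. A minimal sketch using the numbers quoted above:

```python
from scipy import stats

df_between, df_within = 2, 87   # degrees of freedom from the ANOVA table
f_statistic = 3.629             # F statistic reported in the ANOVA table

critical_value = stats.f.ppf(0.95, df_between, df_within)  # about 3.101 at alpha = 0.05
p_value = stats.f.sf(f_statistic, df_between, df_within)   # P(F > 3.629)

print(f"critical value = {critical_value:.3f}, P value = {p_value:.3f}")
# The null hypothesis is rejected because 3.629 > 3.101 (equivalently, P < 0.05).
```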

Post-hoc Test

Anyone who has performed ANOVA has heard the term post-hoc test. It refers to analysis “after the fact” and derives from the Latin for “after that.” A post-hoc test is needed because the conclusion that can be drawn from ANOVA is limited. When the null hypothesis that the population means of three mutually independent groups are equal is rejected, what we learn is not that all three groups differ from each other, but only that at least one group differs; ANOVA does not tell us which group differs from which ( Fig. 5 ). As a result, additional pairwise comparisons are made to verify which groups differ from each other, and this process is referred to as the post-hoc test.

[Fig. 5. Rejecting the null hypothesis indicates only that at least one group mean differs; it does not identify which groups differ (kjae-70-22-g005.jpg)]

The significance level is adjusted by various methods [ 1 ], such as dividing the significance level by the number of comparisons made, and different post-hoc tests correspond to different adjustment methods. Whichever method is used, there is no major problem as long as the method is clearly described. One of the best-known methods is the Bonferroni correction: the significance level is divided by the number of comparisons and applied to each pairwise comparison. For example, when comparing the population means of three mutually independent groups A, B, and C at a significance level of 0.05, the level used for the comparisons of A vs B, A vs C, and B vs C would be 0.05/3 = 0.017. Other methods include the Tukey, Scheffé, and Holm methods, all of which are applicable only when the equal variance assumption is satisfied; when this assumption is not satisfied, the Games-Howell method can be applied. These post-hoc tests can produce different results, so it is good practice to specify at least three post-hoc tests before carrying out the actual study; it is then recommended to interpret the differences in the population means using the result that appears most frequently among them.
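As a concrete illustration of the Bonferroni approach described above, the following Python sketch runs the three pairwise Student's t-tests and compares each P value against the adjusted level 0.05/3 (the group data are hypothetical):

```python
from itertools import combinations
from scipy import stats

# Hypothetical data for three mutually independent groups A, B and C
data = {
    "A": [23.1, 25.4, 24.8, 26.0, 24.2],
    "B": [27.9, 28.4, 26.7, 29.1, 28.0],
    "C": [24.0, 25.1, 23.8, 24.9, 25.5],
}

adjusted_alpha = 0.05 / 3   # Bonferroni-adjusted significance level, about 0.017

for (name1, x1), (name2, x2) in combinations(data.items(), 2):
    t, p = stats.ttest_ind(x1, x2)   # Student's t-test for two independent groups
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"{name1} vs {name2}: t = {t:.2f}, p = {p:.4f} ({verdict} at alpha = {adjusted_alpha:.3f})")
```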

Conclusions

A wide variety of approaches and explanatory methods are available for explaining ANOVA. The illustrations in this manuscript are presented as a tool for those who are dealing with statistics for the first time. As the author is not a statistician, there may be some errors in the illustrations; nevertheless, they should be sufficient for understanding ANOVA at a glance and grasping its basic concept.

ANOVA is a parametric method, which assumes the distribution of the underlying population in advance. Therefore, normality, independence, and equal variance of the samples must be satisfied for ANOVA. Before deriving the results, one must verify that the samples were drawn independently of each other, use Levene's test to determine whether homogeneity of variance is satisfied, and use the Shapiro-Wilk or Kolmogorov-Smirnov test to determine whether normality is satisfied [ 2 , 3 , 4 ].
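For readers working in Python, the variance-homogeneity and normality checks mentioned above can be run with SciPy. A minimal sketch on hypothetical group data:

```python
from scipy import stats

# Hypothetical samples from three groups
group1 = [172.0, 168.5, 175.2, 170.1, 169.8]
group2 = [181.3, 178.9, 183.0, 179.5, 182.1]
group3 = [165.4, 167.2, 166.8, 168.9, 164.7]

# Levene's test for homogeneity of variance (H0: the variances are equal)
levene_stat, levene_p = stats.levene(group1, group2, group3)

# Shapiro-Wilk test for normality, applied to each group (H0: the data are normal)
shapiro_p = [stats.shapiro(g).pvalue for g in (group1, group2, group3)]

print(f"Levene's test: p = {levene_p:.3f}")
print("Shapiro-Wilk p values:", [round(p, 3) for p in shapiro_p])
# If all p values exceed 0.05, the equal-variance and normality assumptions
# are not rejected and one-way ANOVA can proceed.
```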

1) A, B, C three paired comparisons: A vs B, A vs C and B vs C.


Application of one-way ANOVA in completely randomized experiments

Zaharah Wahid¹, Ahmad Izwan Latiff² and Kartini Ahmad¹

Published under licence by IOP Publishing Ltd. Journal of Physics: Conference Series, Volume 949, 4th International Conference on Mathematical Applications in Engineering 2017 (ICMAE'17), 8–9 August 2017, International Islamic University Malaysia, Kuala Lumpur, Malaysia. Citation: Zaharah Wahid et al 2017 J. Phys.: Conf. Ser. 949 012017. DOI: 10.1088/1742-6596/949/1/012017


Author affiliations

1 Department of Science in Engineering, Faculty of Engineering, International Islamic University Malaysia, P.O. Box 10, Jalan Gombak, 53100, Kuala Lumpur, Malaysia

2 Department of Biotechnology in Engineering, Faculty of Engineering, International Islamic University Malaysia, P.O. Box 10, Jalan Gombak, 53100, Kuala Lumpur, Malaysia


This paper describes an application of the statistical technique of one-way ANOVA to completely randomized experiments with three replicates. The technique was applied to a single factor with four levels and multiple observations at each level. The aim of this study is to investigate the relationship between the chemical oxygen demand index and the on-site location. Two different approaches are employed for the analyses: the critical value and the p-value. The paper also presents key assumptions of the technique that must be satisfied by the data in order to obtain valid results. Pairwise comparisons by the Tukey method are also considered and discussed to determine where the significant differences among the means lie after the ANOVA has been performed. The results revealed a statistically significant relationship between the chemical oxygen demand index and the on-site location.


Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence . Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

Statology

Statistics Made Easy

One-Way ANOVA: Definition, Formula, and Example

A one-way ANOVA  (“analysis of variance”) compares the means of three or more independent groups to determine if there is a statistically significant difference between the corresponding population means.

This tutorial explains the following:

  • The motivation for performing a one-way ANOVA.
  • The assumptions that should be met to perform a one-way ANOVA.
  • The process to perform a one-way ANOVA.
  • An example of how to perform a one-way ANOVA.

One-Way ANOVA: Motivation

Suppose we want to know whether or not three different exam prep programs lead to different mean scores on a college entrance exam. Since there are millions of high school students around the country, it would be too time-consuming and costly to go around to each student and let them use one of the exam prep programs.

Instead, we might select three  random samples  of 100 students from the population and allow each sample to use one of the three test prep programs to prepare for the exam. Then, we could record the scores for each student once they take the exam.

[Figure: selecting three random samples of students from the population]

However, it’s virtually guaranteed that the mean exam score between the three samples will be at least a little different.  The question is whether or not this difference is statistically significant . Fortunately, a one-way ANOVA allows us to answer this question.

One-Way ANOVA: Assumptions

For the results of a one-way ANOVA to be valid, the following assumptions should be met:

1. Normality  – Each sample was drawn from a normally distributed population.

2. Equal Variances  – The variances of the populations that the samples come from are equal. You can use Bartlett’s Test to verify this assumption.

3. Independence  – The observations in each group are independent of each other and the observations within groups were obtained by a random sample.

Read this article for in-depth details on how to check these assumptions.

One-Way ANOVA: The Process

A one-way ANOVA uses the following null and alternative hypotheses:

  • H 0 (null hypothesis):  μ 1  = μ 2  = μ 3  = … = μ k  (all the population means are equal)
  • H 1  (alternative hypothesis):  at least one population mean is different   from the rest

You will typically use some statistical software (such as R, Excel, Stata, SPSS, etc.) to perform a one-way ANOVA since it’s cumbersome to perform by hand.

No matter which software you use, you will receive the following table as output:

  • SSR: regression sum of squares
  • SSE: error sum of squares
  • SST: total sum of squares (SST = SSR + SSE)
  • df r : regression degrees of freedom (df r  = k-1)
  • df e : error degrees of freedom (df e  = n-k)
  • k:  total number of groups
  • n:  total observations
  • MSR:  regression mean square (MSR = SSR/df r )
  • MSE: error mean square (MSE = SSE/df e )
  • F:  The F test statistic (F = MSR/MSE)
  • p:  The p-value that corresponds to F dfr, dfe

If the p-value is less than your chosen significance level (e.g. 0.05), then you can reject the null hypothesis and conclude that at least one of the population means is different from the others.

Note: If you reject the null hypothesis, this indicates that at least one of the population means is different from the others, but the ANOVA table doesn’t specify which  population means are different. To determine this, you need to perform post hoc tests , also known as “multiple comparisons” tests.
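No matter which software produces it, the table's entries can also be computed directly. Here is a minimal Python sketch (using SciPy and hypothetical data) that reproduces the quantities listed above: SSR, SSE, SST, the degrees of freedom, mean squares, F, and p:

```python
import numpy as np
from scipy import stats

# Hypothetical scores for k = 3 independent groups
groups = [np.array([85.0, 86.0, 88.0, 75.0, 78.0]),
          np.array([91.0, 92.0, 93.0, 85.0, 87.0]),
          np.array([79.0, 78.0, 88.0, 94.0, 92.0])]

k = len(groups)                              # total number of groups
n = sum(len(g) for g in groups)              # total observations
grand_mean = np.concatenate(groups).mean()

ssr = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)  # regression (between) SS
sse = sum(((g - g.mean()) ** 2).sum() for g in groups)            # error (within) SS
sst = ssr + sse                                                   # total SS

df_r, df_e = k - 1, n - k
msr, mse = ssr / df_r, sse / df_e
f = msr / mse
p = stats.f.sf(f, df_r, df_e)

print(f"SSR = {ssr:.1f}, SSE = {sse:.1f}, SST = {sst:.1f}")
print(f"F({df_r}, {df_e}) = {f:.3f}, p = {p:.4f}")

# The same F and p come directly from scipy.stats.f_oneway:
print(stats.f_oneway(*groups))
```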

One-Way ANOVA: Example

Suppose we want to know whether or not three different exam prep programs lead to different mean scores on a certain exam. To test this, we recruit 30 students to participate in a study and split them into three groups.

The students in each group are randomly assigned to use one of the three exam prep programs for the next three weeks to prepare for an exam. At the end of the three weeks, all of the students take the same exam. 

The exam scores for each group are shown below:

[Table: exam scores for each of the three groups]

To perform a one-way ANOVA on this data, we will use the Statology One-Way ANOVA Calculator with the following input:

[Screenshot: one-way ANOVA calculator input]

From the output table we see that the F test statistic is  2.358  and the corresponding p-value is  0.11385 .

[Screenshot: ANOVA output table]

Since this p-value is not less than 0.05, we fail to reject the null hypothesis.

This means  we don’t have sufficient evidence to say that there is a statistically significant difference between the mean exam scores of the three groups.

Additional Resources

The following articles explain how to perform a one-way ANOVA using different statistical softwares:

  • How to Perform a One-Way ANOVA in Excel
  • How to Perform a One-Way ANOVA in R
  • How to Perform a One-Way ANOVA in Python
  • How to Perform a One-Way ANOVA in SAS
  • How to Perform a One-Way ANOVA in SPSS
  • How to Perform a One-Way ANOVA in Stata
  • How to Perform a One-Way ANOVA on a TI-84 Calculator
  • Online One-Way ANOVA Calculator


The one-way ANOVA test explained

Affiliation.

  • 1 University of Limerick, Limerick, Republic of Ireland.
  • PMID: 37317616
  • DOI: 10.7748/nr.2023.e1885

Background: Quantitative methods and statistical analysis are essential tools in nursing research, as they support researchers in testing phenomena, illustrate their findings clearly and accurately, and provide explanation or generalisation of the phenomenon being investigated. The most popular inferential statistical test is the one-way analysis of variance (ANOVA), as it is the test designated for comparing the means of a study's target groups to identify whether they are statistically different from one another. However, the nursing literature has identified that statistical tests are not being used correctly and findings are being reported incorrectly.

Aim: To present and explain the one-way ANOVA.

Discussion: The article presents the purpose of inferential statistics and explains the one-way ANOVA. It uses relevant examples to examine the steps needed to successfully apply a one-way ANOVA. The authors also provide recommendations for other statistical tests and measurements that can be used alongside the one-way ANOVA.

Conclusion: Nurses need to develop their understanding and knowledge of statistical methods, to engage in research and evidence-based practice.

Implications for practice: This article enhances the understanding and application of one-way ANOVAs by nursing students, novice researchers, nurses and those engaged in academic studies. Nurses, nursing students and nurse researchers need to familiarise themselves with statistical terminology and develop their understanding of statistical concepts, to support evidence-based, quality, safe care.

Keywords: data analysis; quantitative research; research; study design.

©2023 RCN Publishing Company Ltd. All rights reserved. Not to be copied, transmitted or recorded in any way, in whole or part, without prior permission of the publishers.

  • Analysis of Variance
  • Correlation of Data
  • Nursing Research*
  • Research Design
  • Students, Nursing*

Teach yourself statistics

One-Way Analysis of Variance: Example

In this lesson, we apply one-way analysis of variance to some fictitious data, and we show how to interpret the results of our analysis.

Note: Computations for analysis of variance are usually handled by a software package. For this example, however, we will do the computations "manually", since the gory details have educational value.

Problem Statement

A pharmaceutical company conducts an experiment to test the effect of a new cholesterol medication. The company selects 15 subjects randomly from a larger population. Each subject is randomly assigned to one of three treatment groups. Within each treatment group, subjects receive a different dose of the new medication: in Group 1, subjects receive 0 mg/day; in Group 2, 50 mg/day; and in Group 3, 100 mg/day.

The treatment levels represent all the levels of interest to the experimenter, so this experiment used a fixed-effects model to select treatment levels for study.

After 30 days, doctors measure the cholesterol level of each subject. The results for all 15 subjects appear in the table below:

In conducting this experiment, the experimenter had two research questions:

  • Does dosage level have a significant effect on cholesterol level?
  • How strong is the effect of dosage level on cholesterol level?

To answer these questions, the experimenter intends to use one-way analysis of variance.

Is One-Way ANOVA the Right Technique?

Before you crunch the first number in one-way analysis of variance, you must be sure that one-way analysis of variance is the correct technique. That means you need to ask two questions:

  • Is the experimental design compatible with one-way analysis of variance?
  • Does the data set satisfy the critical assumptions required for one-way analysis of variance?

Let's address both of those questions.

Experimental Design

As we discussed in the previous lesson (see One-Way Analysis of Variance: Fixed Effects ), one-way analysis of variance is only appropriate with one experimental design - a completely randomized design. That is exactly the design used in our cholesterol study, so we can check the experimental design box.

Critical Assumptions

We also learned in the previous lesson that one-way analysis of variance makes three critical assumptions:

  • Independence . The dependent variable score for each experimental unit is independent of the score for any other unit.
  • Normality . In the population, dependent variable scores are normally distributed within treatment groups.
  • Equality of variance . In the population, the variance of dependent variable scores in each treatment group is equal. (Equality of variance is also known as homogeneity of variance or homoscedasticity.)

Therefore, for the cholesterol study, we need to make sure our data set is consistent with the critical assumptions.

Independence of Scores

The assumption of independence is the most important assumption. When that assumption is violated, the resulting statistical tests can be misleading.

The independence assumption is satisfied by the design of the study, which features random selection of subjects and random assignment to treatment groups. Randomization tends to distribute effects of extraneous variables evenly across groups.

Normal Distributions in Groups

Violations of normality can be a problem when sample size is small, as it is in this cholesterol study. Therefore, it is important to be on the lookout for any indication of non-normality.

There are many different ways to check for normality. On this website, we describe three at: How to Test for Normality: Three Simple Tests . Given the small sample size, our best option for testing normality is to look at the following descriptive statistics:

  • Central tendency. The mean and the median are summary measures used to describe central tendency - the most "typical" value in a set of values. With a normal distribution, the mean is equal to the median.
  • Skewness. Skewness is a measure of the asymmetry of a probability distribution. If observations are equally distributed around the mean, the skewness value is zero; otherwise, the skewness value is positive or negative. As a rule of thumb, skewness between -2 and +2 is consistent with a normal distribution.
  • Kurtosis. Kurtosis is a measure of whether observations cluster around the mean of the distribution or in the tails of the distribution. The normal distribution has a kurtosis value of zero. As a rule of thumb, kurtosis between -2 and +2 is consistent with a normal distribution.

The table below shows the mean, median, skewness, and kurtosis for each group from our study.

In all three groups, the difference between the mean and median looks small (relative to the range ). And skewness and kurtosis measures are consistent with a normal distribution (i.e., between -2 and +2). These are crude tests, but they provide some confidence for the assumption of normality in each group.

Note: With Excel, you can easily compute the descriptive statistics in Table 1. To see how, go to: How to Test for Normality: Example 1 .

Homogeneity of Variance

When the normality assumption is satisfied, you can use Hartley's Fmax test to test for homogeneity of variance. Here's how to implement the test.

First, compute the variance ( s²_j ) of each group:

s²_j = Σ_i ( X_i,j − X_j )² / ( n_j − 1 )

where X_i,j is the score for observation i in Group j, X_j is the mean of Group j, and n_j is the number of observations in Group j.

Here is the variance ( s²_j ) for each group in the cholesterol study.

F_RATIO = s²_MAX / s²_MIN = 1170 / 450 = 2.6

where s²_MAX is the largest group variance, and s²_MIN is the smallest group variance.

Finally, compare the F ratio to the critical value of the Fmax statistic, obtained from a table of critical Fmax values based on the number of groups and on n − 1 degrees of freedom, where n is the largest sample size in any group.

Note: The critical F values in the table are based on a significance level of 0.05.

Here, the F ratio (2.6) is smaller than the Fmax value (15.5), so we conclude that the variances are homogeneous.

Note: Other tests, such as Bartlett's test , can also test for homogeneity of variance. For the record, Bartlett's test yields the same conclusion for the cholesterol study; namely, the variances are homogeneous.

Analysis of Variance

Having confirmed that the critical assumptions are tenable, we can proceed with a one-way analysis of variance. That means taking the following steps:

  • Specify a mathematical model to describe the causal factors that affect the dependent variable.
  • Write statistical hypotheses to be tested by experimental data.
  • Specify a significance level for a hypothesis test.
  • Compute the grand mean and the mean scores for each group.
  • Compute sums of squares for each effect in the model.
  • Find the degrees of freedom associated with each effect in the model.
  • Based on sums of squares and degrees of freedom, compute mean squares for each effect in the model.
  • Compute a test statistic , based on observed mean squares and their expected values.
  • Find the P value for the test statistic.
  • Accept or reject the null hypothesis , based on the P value and the significance level.
  • Assess the magnitude of the effect of the independent variable, based on sums of squares.

Now, let's execute each step, one-by-one, with our cholesterol medication experiment.

Mathematical Model

For every experimental design, there is a mathematical model that accounts for all of the independent and extraneous variables that affect the dependent variable. In our experiment, the dependent variable ( X ) is the cholesterol level of a subject, and the independent variable ( β ) is the dosage level administered to a subject.

For example, here is the fixed-effects model for a completely randomized design:

X i j = μ + β j + ε i ( j )

where X i j is the cholesterol level for subject i in treatment group j , μ is the population mean, β j is the effect of the dosage level administered to subjects in group j ; and ε i ( j ) is the effect of all other extraneous variables on subject i in treatment j .

Statistical Hypotheses

For fixed-effects models, it is common practice to write statistical hypotheses in terms of the treatment effect β j . With that in mind, here is the null hypothesis and the alternative hypothesis for a one-way analysis of variance:

H 0 : β j = 0 for all j

H 1 : β j ≠ 0 for some j

If the null hypothesis is true, the mean score (i.e., mean cholesterol level) in each treatment group should equal the population mean. Thus, if the null hypothesis is true, mean scores in the k treatment groups should be equal. If the null hypothesis is false, at least one pair of mean scores should be unequal.

Significance Level

The significance level (also known as alpha or α) is the probability of rejecting the null hypothesis when it is actually true. The significance level for an experiment is specified by the experimenter, before data collection begins.

Experimenters often choose significance levels of 0.05 or 0.01. For this experiment, let's use a significance level of 0.05.

Mean Scores

Analysis of variance begins by computing a grand mean and group means:

  • Grand mean. The grand mean ( X ) is the mean of all n observations across all groups:

X = ( 1 / 15 ) * ( 210 + 210 + ... + 270 + 240 ) = 238

  • Group means. The mean of group j ( X j ) is the mean of all observations in group j:

X 1 = 258

X 2 = 246

X 3 = 210

In the equations above, n is the total sample size across all groups, and n j is the sample size in Group j.

Sums of Squares

A sum of squares is the sum of squared deviations from a mean score. One-way analysis of variance makes use of three sums of squares:

  • Between-groups sum of squares. The between-groups sum of squares (SSB) measures variation of the group means around the grand mean: SSB = Σ_{j=1}^{k} n_j ( X j − X )²

SSB = 5 * [ ( 238 − 258 )² + ( 238 − 246 )² + ( 238 − 210 )² ] = 6240

  • Within-groups sum of squares. The within-groups sum of squares (SSW) measures variation of individual scores around their group means: SSW = Σ_{j=1}^{k} Σ_{i=1}^{n_j} ( X i j − X j )²

SSW = 2304 + ... + 900 = 9000

  • Total sum of squares. The total sum of squares (SST) measures variation of all scores around the grand mean: SST = Σ_{j=1}^{k} Σ_{i=1}^{n_j} ( X i j − X )²

SST = 784 + 4 + 1024 + ... + 784 + 784 + 4 = 15,240

It turns out that the total sum of squares is equal to the between-groups sum of squares plus the within-groups sum of squares, as shown below:

SST = SSB + SSW

15,240 = 6240 + 9000

Degrees of Freedom

The term degrees of freedom (df) refers to the number of independent sample points used to compute a statistic minus the number of parameters estimated from the sample points.

To illustrate what is going on, let's find the degrees of freedom associated with the various sum of squares computations:

Here, the formula uses k independent sample points, the sample means X   j  . And it uses one parameter estimate, the grand mean X , which was estimated from the sample points. So, the between-groups sum of squares has k - 1 degrees of freedom ( df BG  ).

df BG = k - 1 = 3 - 1 = 2

Here, the formula uses n independent sample points, the individual subject scores X  i j  . And it uses k parameter estimates, the group means X   j  , which were estimated from the sample points. So, the within-groups sum of squares has n - k degrees of freedom ( df WG  ).

n = Σ n i = 5 + 5 + 5 = 15

df WG = n - k = 15 - 3 = 12

Here, the formula uses n independent sample points, the individual subject scores X  i j  . And it uses one parameter estimate, the grand mean X , which was estimated from the sample points. So, the total sum of squares has n  - 1 degrees of freedom ( df TOT  ).

df TOT  = n - 1 = 15 - 1 = 14

The degrees of freedom for each sum of squares are summarized in the table below:

Mean Squares

A mean square is an estimate of population variance. It is computed by dividing a sum of squares (SS) by its corresponding degrees of freedom (df), as shown below:

MS = SS / df

To conduct a one-way analysis of variance, we are interested in two mean squares:

MS WG = SSW / df WG

MS WG = 9000 / 12 = 750

MS BG = SSB / df BG

MS BG = 6240 / 2 = 3120

Expected Value

The expected value of a mean square is the average value of the mean square over a large number of experiments.

Statisticians have derived formulas for the expected value of the within-groups mean square ( MS WG  ) and for the expected value of the between-groups mean square ( MS BG  ). For one-way analysis of variance, the expected value formulas are:

Fixed-effects and random-effects:

E( MS WG ) = σ ε ²

Fixed-effects:

E( MS BG ) = σ ε ² + n Σ β j ² / ( k − 1 )

Random-effects:

E( MS BG ) = σ ε ² + n σ β ²

In the equations above, E( MS WG  ) is the expected value of the within-groups mean square; E( MS BG  ) is the expected value of the between-groups mean square; n is total sample size; k is the number of treatment groups; β  j is the treatment effect in Group j ; σ ε 2 is the variance attributable to everything except the treatment effect (i.e., all the extraneous variables); and σ β 2 is the variance due to random selection of treatment levels.

Notice that MS BG should equal MS WG when the variation due to treatment effects ( β j for fixed effects and σ β 2 for random effects) is zero, that is, when the independent variable does not affect the dependent variable. And MS BG should be bigger than MS WG when the variation due to treatment effects is not zero, that is, when the independent variable does affect the dependent variable.

Conclusion: By examining the relative size of the mean squares, we can make a judgment about whether an independent variable affects a dependent variable.

Test Statistic

Suppose we use the mean squares to define a test statistic F as follows:

F(v 1 , v 2 ) = MS BG / MS WG

F(2, 12) = 3120 / 750 = 4.16

where MS BG is the between-groups mean square, MS WG is the within-groups mean square, v 1 is the degrees of freedom for MS BG , and v 2 is the degrees of freedom for MS WG .

Defined in this way, the F ratio measures the size of MS BG relative to MS WG . The F ratio is a convenient measure that we can use to test the null hypothesis. Here's how:

  • When the F ratio is close to one, MS BG is approximately equal to MS WG . This indicates that the independent variable did not affect the dependent variable, so we cannot reject the null hypothesis.
  • When the F ratio is significantly greater than one, MS BG is bigger than MS WG . This indicates that the independent variable did affect the dependent variable, so we must reject the null hypothesis.

What does it mean for the F ratio to be significantly greater than one? To answer that question, we need to talk about the P-value.

In an experiment, a P-value is the probability of obtaining a result more extreme than the observed experimental outcome, assuming the null hypothesis is true.

With analysis of variance, the F ratio is the observed experimental outcome that we are interested in. So, the P-value would be the probability that an F statistic would be more extreme (i.e., bigger) than the actual F ratio computed from experimental data.

We can use Stat Trek's F Distribution Calculator to find the probability that an F statistic will be bigger than the actual F ratio observed in the experiment. Enter the between-groups degrees of freedom (2), the within-groups degrees of freedom (12), and the observed F ratio (4.16) into the calculator; then, click the Calculate button.

From the calculator, we see that the P ( F > 4.16 ) equals about 0.04. Therefore, the P-Value is 0.04.
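This tail probability can also be checked with SciPy's F distribution. A quick sketch using the numbers above:

```python
from scipy import stats

f_ratio = 3120 / 750                   # MS_BG / MS_WG = 4.16
p_value = stats.f.sf(f_ratio, 2, 12)   # P(F > 4.16) with 2 and 12 degrees of freedom

print(f"F = {f_ratio:.2f}, P-value = {p_value:.3f}")   # about 0.04
```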

Hypothesis Test

Recall that we specified a significance level 0.05 for this experiment. Once you know the significance level and the P-value, the hypothesis test is routine. Here's the decision rule for accepting or rejecting the null hypothesis:

  • If the P-value is bigger than the significance level, accept the null hypothesis.
  • If the P-value is equal to or smaller than the significance level, reject the null hypothesis.

Since the P-value (0.04) in our experiment is smaller than the significance level (0.05), we reject the null hypothesis that drug dosage had no effect on cholesterol level. And we conclude that the mean cholesterol level in at least one treatment group differed significantly from the mean cholesterol level in another group.

Magnitude of Effect

The hypothesis test tells us whether the independent variable in our experiment has a statistically significant effect on the dependent variable, but it does not address the magnitude of the effect. Here's the issue:

  • When the sample size is large, you may find that even small differences in treatment means are statistically significant.
  • When the sample size is small, you may find that even big differences in treatment means are not statistically significant.

With this in mind, it is customary to supplement analysis of variance with an appropriate measure of effect size. Eta squared (η 2 ) is one such measure. Eta squared is the proportion of variance in the dependent variable that is explained by a treatment effect. The eta squared formula for one-way analysis of variance is:

η 2 = SSB / SST

where SSB is the between-groups sum of squares and SST is the total sum of squares.

Given this formula, we can compute eta squared for this drug dosage experiment, as shown below:

η 2 = SSB / SST = 6240 / 15240 = 0.41

Thus, 41 percent of the variance in our dependent variable (cholesterol level) can be explained by variation in our independent variable (dosage level). It appears that the relationship between dosage level and cholesterol level is significant not only in a statistical sense; it is significant in a practical sense as well.

ANOVA Summary Table

It is traditional to summarize ANOVA results in an analysis of variance table. The analysis that we just conducted provides all of the information that we need to produce the following ANOVA summary table:

Analysis of Variance Table

Source            SS       df    MS      F      P
Between groups    6,240     2    3,120   4.16   0.04
Within groups     9,000    12      750
Total            15,240    14

This ANOVA table allows any researcher to interpret the results of the experiment, at a glance.

The P-value (shown in the last column of the ANOVA table) is the probability that an F statistic would be more extreme (bigger) than the F ratio shown in the table, assuming the null hypothesis is true. When the P-value is bigger than the significance level, we accept the null hypothesis; when it is smaller, we reject it. Here, the P-value (0.04) is smaller than the significance level (0.05), so we reject the null hypothesis.

To assess the strength of the treatment effect, an experimenter might compute eta squared (η 2 ). The computation is easy, using sum of squares entries from the ANOVA table, as shown below:

η 2 = SSB / SST = 6,240 / 15,240 = 0.41

For this experiment, an eta squared of 0.41 means that 41% of the variance in the dependent variable can be explained by the effect of the independent variable.

An Easier Option

In this lesson, we showed all of the hand calculations for a one-way analysis of variance. In the real world, researchers seldom conduct analysis of variance by hand. They use statistical software. In the next lesson, we'll analyze data from this problem with Excel. Hopefully, we'll get the same result.

13.1 One-Way ANOVA

The purpose of a one-way ANOVA test is to determine the existence of a statistically significant difference among several group means. The test uses variances to help determine if the means are equal or not. To perform a one-way ANOVA test, there are five basic assumptions to be fulfilled:

  • Each population from which a sample is taken is assumed to be normal.
  • All samples are randomly selected and independent.
  • The populations are assumed to have equal standard deviations (or variances).
  • The factor is a categorical variable.
  • The response is a numerical variable.

The Null and Alternative Hypotheses

The null hypothesis is that all the group population means are the same. The alternative hypothesis is that at least one pair of means is different. For example, if there are k groups:

H 0 : μ 1 = μ 2 = μ 3 = ... = μ k

H a : At least two of the group means μ 1 , μ 2 , μ 3 , ..., μ k are not equal. That is, μ i ≠ μ j for some i ≠ j .

The graphs, a set of box plots representing the distribution of values with the group means indicated by a horizontal line through the box, help in the understanding of the hypothesis test. In the first graph (red box plots), H 0 : μ 1 = μ 2 = μ 3 and the three populations have the same distribution if the null hypothesis is true. The variance of the combined data is approximately the same as the variance of each of the populations.

If the null hypothesis is false, then the variance of the combined data is larger, which is caused by the different means as shown in the second graph (green box plots).
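A small simulation makes this point concrete. The following sketch (using NumPy; the means and spreads are arbitrary) compares the variance of the combined data when the population means are equal versus when they differ:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 5.0, 1000

# Case 1: null hypothesis true - all three populations share the same mean
same = [rng.normal(50, sigma, n) for _ in range(3)]

# Case 2: null hypothesis false - the population means differ
diff = [rng.normal(mu, sigma, n) for mu in (40, 50, 60)]

print("within-group variances (H0 true):    ", [round(g.var(), 1) for g in same])
print("variance of combined data (H0 true): ", round(np.concatenate(same).var(), 1))
print("variance of combined data (H0 false):", round(np.concatenate(diff).var(), 1))
# When the means differ, the combined variance is noticeably larger than the
# within-group variance - exactly the discrepancy the F statistic measures.
```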


Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute Texas Education Agency (TEA). The original material is available at: https://www.texasgateway.org/book/tea-statistics . Changes were made to the original material, including updates to art, structure, and other content updates.

Access for free at https://openstax.org/books/statistics/pages/1-introduction
  • Authors: Barbara Illowsky, Susan Dean
  • Publisher/website: OpenStax
  • Book title: Statistics
  • Publication date: Mar 27, 2020
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/statistics/pages/1-introduction
  • Section URL: https://openstax.org/books/statistics/pages/13-1-one-way-anova

© Jan 23, 2024 Texas Education Agency (TEA). The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

One-Way Anova


  • Amanda Ross 3 &
  • Victor L. Willson 4  


A one-way ANOVA (analysis of variance) compares the means of two or more groups on one dependent variable. A one-way ANOVA is required when the study includes more than two groups (in other words, when a t-test cannot be used). As with t-tests, there is one independent variable and one dependent variable. The dependent variable must be interval-level and the groups nominal. The assumption of normal distribution is not required.


Author information

Authors and affiliations.

A. A. Ross Consulting and Research, USA

Amanda Ross

Texas A&M University, USA

Victor L. Willson


Copyright information

© 2017 Sense Publishers

About this chapter

Ross, A., Willson, V.L. (2017). One-Way Anova. In: Basic and Advanced Statistical Tests. SensePublishers, Rotterdam. https://doi.org/10.1007/978-94-6351-086-8_5

Download citation

DOI : https://doi.org/10.1007/978-94-6351-086-8_5

Publisher Name : SensePublishers, Rotterdam

Online ISBN : 978-94-6351-086-8



LEARN STATISTICS EASILY

How to Report One-Way ANOVA Results in APA Style: A Step-by-Step Guide

You will learn how to report the results of a one-way ANOVA, including the F-statistic, degrees of freedom, and effect size.

  • One-way ANOVA identifies significant differences between three or more groups’ means.
  • A p-value < 0.05 indicates statistically significant differences between group means.
  • Report effect size (e.g., eta squared η²) to measure the magnitude of group differences.
  • Use post hoc tests, like Tukey’s HSD, to identify significant differences between specific pairs.
  • Including effect size and other relevant information enhances readers’ understanding.

Introduction

One-way analysis of variance ( ANOVA ) is a statistical procedure used to determine whether there are significant differences between the means of three or more groups.

When writing the results of a one-way ANOVA in APA style, it is crucial to report the relevant statistical information clearly and concisely.

Step-By-Step

1.  State the one-way ANOVA purpose , describing the research question and hypothesis.

2.  Report each group’s sample size , specifying the number of participants per group.

3.  Provide each group’s mean and standard deviation , reflecting data distribution.

4.  Report the F-statistic and  degrees of freedom (between and within groups).

5.  Indicate the p-value ; values below 0.05 are generally considered statistically significant.

6.  Report effect size (e.g., eta squared (η²)) to convey the difference magnitude between groups.

7.  Interpret the results based on the F-statistic, degrees of freedom, p-value, and effect sizes.

8.  Include additional information, such as post hoc tests or graphs, if relevant.
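As a practical companion to these steps, here is a minimal Python sketch (hypothetical scores; SciPy and NumPy function names) that computes the F-statistic, degrees of freedom, p-value, and eta squared, and formats them in an APA-like string:

```python
import numpy as np
from scipy import stats

# Hypothetical test scores for three teaching-method groups
lecture = np.array([72.0, 78.0, 69.0, 75.0, 71.0, 74.0, 70.0, 77.0])
flipped = np.array([80.0, 85.0, 83.0, 79.0, 86.0, 84.0, 81.0, 88.0])
blended = np.array([88.0, 92.0, 90.0, 87.0, 93.0, 91.0, 89.0, 94.0])
groups = [lecture, flipped, blended]

# F-statistic and p-value (steps 4 and 5)
f_stat, p_value = stats.f_oneway(*groups)

# Degrees of freedom and eta squared (steps 4 and 6)
k = len(groups)
n = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((np.concatenate(groups) - grand_mean) ** 2).sum()
eta_squared = ss_between / ss_total

print(f"F({k - 1}, {n - k}) = {f_stat:.2f}, p = {p_value:.3f}, eta squared = {eta_squared:.2f}")
```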


How to report the one-way ANOVA results in APA style?

“ This study compared the effects of three teaching methods on test performance. We assigned 60 students randomly to three groups (n = 20 per group): traditional lecture, flipped classroom, or blended learning.

Mean test scores and standard deviations were:

  • Traditional lecture group (M = 75, SD = 10).
  • Flipped classroom group (M = 85, SD = 8).
  • Blended learning group (M = 90, SD = 7).

We conducted a one-way ANOVA to compare the means of the three groups.

A one-way ANOVA revealed a significant effect of the teaching method on test performance, F(2,57) = 15.68, p < 0.001. The effect size, eta squared (η²), was 0.36, indicating a large effect.

Tukey’s HSD post hoc test showed the blended learning group scored significantly higher than both traditional lecture (p < 0.001) and flipped classroom (p < 0.01) groups. The flipped classroom group scored significantly higher than the traditional lecture group (p < 0.05).

These findings suggest that blended learning leads to the highest test performance, followed by flipped classroom, and lastly, traditional lecture. The effect size confirms these differences are practically significant. “
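If this kind of analysis is run in Python, the Tukey HSD post hoc comparisons quoted above can be produced with statsmodels (a sketch with hypothetical scores; pairwise_tukeyhsd is the statsmodels function):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical scores for the three teaching-method groups
lecture = [72.0, 78.0, 69.0, 75.0, 71.0, 74.0, 70.0, 77.0]
flipped = [80.0, 85.0, 83.0, 79.0, 86.0, 84.0, 81.0, 88.0]
blended = [88.0, 92.0, 90.0, 87.0, 93.0, 91.0, 89.0, 94.0]

scores = np.array(lecture + flipped + blended)
labels = (["lecture"] * len(lecture) + ["flipped"] * len(flipped) + ["blended"] * len(blended))

# Pairwise Tukey HSD comparisons at alpha = 0.05
result = pairwise_tukeyhsd(endog=scores, groups=labels, alpha=0.05)
print(result)
```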

How to Report Effect Size in APA Style?

Besides reporting the statistical significance of the one-way ANOVA results, it is also crucial to report the effect size .

It measures the magnitude  of the relationship between the independent variable (teaching method, in this example) and the dependent variable (test performance).

It provides a way to quantify the differences between the means of the groups. It can help the audience better understand the practical significance of the results.

For a one-way ANOVA, a commonly used method to report effect size is eta squared (η²) .

Eta squared (η²) measures the proportion of the total variance in the dependent variable that can be attributed to the independent variable.

Once you have calculated eta squared (η²), you can use these general guidelines to interpret the result: values of roughly 0.01–0.059 indicate a small effect, 0.06–0.139 a medium effect, and 0.14 or above a large effect.

Note:  These guidelines are not strict thresholds but should be used as a general reference to help researchers interpret the practical significance of their findings.

To report the effect size in your one-way ANOVA results in APA style, you can include the eta squared (η²) value in the results section of your paper. For example:

“ The results revealed a significant effect of teaching method on test performance, F(2,57) = 15.68, p < 0.001. The effect size, calculated as eta squared (η²), was 0.36, indicating a large effect. “

By reporting the statistical significance and the effect size of the results, you can give the audience a complete understanding of the relationship between the tested variables.

Effectively reporting the results of a one-way ANOVA in APA style is crucial for conducting research and communicating the findings to your audience.

By adhering to the step-by-step guidelines provided in this article, you can present relevant statistical information clearly and concisely.

This entails outlining the one-way ANOVA’s purpose, detailing each group’s descriptive statistics and sample size, presenting the F-statistic and p-value, interpreting the results, and discussing any additional pertinent information, such as post hoc tests.

Moreover, it is vital to report the effect size as it conveys the strength of the relationship between the variables under investigation.

Incorporating these elements into your results section will enable readers to gain a comprehensive and well-rounded understanding of your research findings, ultimately contributing to the rigor and credibility of your study.

FAQ About One-Way ANOVA

Q1: What is a One-Way ANOVA?  One-Way ANOVA is a statistical tool used to determine whether significant differences exist between the means of three or more groups based on a single independent variable.

Q2: How do I decide when to use a One-Way ANOVA?  Use One-Way ANOVA when you have one independent variable with three or more levels (groups) and one continuous dependent variable to compare the group means.

Q3: What is the F-statistic in a One-Way ANOVA?  The F-statistic measures the variance ratio between groups to the variance within groups in a One-Way ANOVA. It helps determine if the observed differences between group means are significant.

Q4: What is the p-value in a One-Way ANOVA?  The p-value is the probability of observing differences between group means at least as large as those in the sample if the null hypothesis (no difference) were true. A p-value below the chosen significance level (commonly 0.05) indicates statistically significant differences between group means.

Q5: How do I report the results of a One-Way ANOVA in APA style?  Report the purpose, sample size, descriptive statistics for each group, F-statistic, degrees of freedom, p-value, effect size, interpretation, and any additional relevant information, such as post hoc tests.

Q6: What is the effect size in a One-Way ANOVA?  The effect size measures the strength of the relationship between the independent and dependent variables. For a One-Way ANOVA, eta squared (η²) is a commonly used measure of effect size.

Q7: How do I interpret the effect size in a One-Way ANOVA?  Eta squared (η²) ranges from 0 to 1, with values of 0.01–0.059, 0.06–0.139, and 0.14 and above representing small, medium, and large effect sizes, respectively.

Q8: What is a post hoc test, and when should I use one?  After a significant One-Way ANOVA result, a post hoc test is conducted to identify specific group pairs with significant differences. Use post hoc tests, such as Tukey’s HSD, when you have three or more groups.

Q9: Can I use a One-Way ANOVA for non-normal data?  One-Way ANOVA is robust against moderate deviations from normality. However, if the data are highly non-normal or are ordinal, consider using a non-parametric alternative, such as the Kruskal-Wallis test.
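As a hedged illustration of this alternative (the data below are invented), the Kruskal-Wallis test is run in scipy much like a one-way ANOVA:

```python
from scipy import stats

# Kruskal-Wallis H-test: a rank-based alternative to one-way ANOVA.
group_a = [12, 15, 14, 10, 39]   # note the outlier
group_b = [28, 30, 27, 31, 29]
group_c = [18, 17, 19, 16, 20]

h_stat, p_value = stats.kruskal(group_a, group_b, group_c)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```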

Q10: What is the difference between a One-Way ANOVA and a t-test?  A t-test is used to compare the means of two groups, while a One-Way ANOVA is used to compare the means of 3 or more groups. Both tests assess whether there are significant differences between group means.


ANOVA (Analysis of variance) – Formulas, Types, and Examples


Analysis of Variance (ANOVA)

Analysis of Variance (ANOVA) is a statistical method used to test differences between two or more means. It is similar to the t-test, but the t-test is generally used for comparing two means, while ANOVA is used when you have more than two means to compare.

ANOVA is based on comparing the variance (or variation) between the data samples to the variation within each particular sample. If the between-group variance is high and the within-group variance is low, this provides evidence that the means of the groups are significantly different.
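As a minimal, hedged sketch of this idea (the three groups below are invented illustration data), scipy’s f_oneway computes exactly this between- versus within-group comparison:

```python
from scipy import stats

# Invented scores for three independent groups.
group_a = [23, 25, 27, 22, 26]   # condition A
group_b = [30, 31, 29, 32, 28]   # condition B
group_c = [24, 26, 25, 27, 23]   # condition C

# f_oneway returns the F-statistic and its p-value for a one-way ANOVA.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A large F-statistic with a small p-value indicates that the variation between the group means is large relative to the variation inside the groups.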

ANOVA Terminology

When discussing ANOVA, there are several key terms to understand:

  • Factor : This is another term for the independent variable in your analysis. In a one-way ANOVA, there is one factor, while in a two-way ANOVA, there are two factors.
  • Levels : These are the different groups or categories within a factor. For example, if the factor is ‘diet’, the levels might be ‘low fat’, ‘medium fat’, and ‘high fat’.
  • Response Variable : This is the dependent variable or the outcome that you are measuring.
  • Within-group Variance : This is the variance or spread of scores within each level of your factor.
  • Between-group Variance : This is the variance or spread of scores between the different levels of your factor.
  • Grand Mean : This is the overall mean when you consider all the data together, regardless of the factor level.
  • Treatment Sums of Squares (SS) : This represents the between-group variability. It is the sum of the squared differences between the group means and the grand mean.
  • Error Sums of Squares (SS) : This represents the within-group variability. It’s the sum of the squared differences between each observation and its group mean.
  • Total Sums of Squares (SS) : This is the sum of the Treatment SS and the Error SS. It represents the total variability in the data.
  • Degrees of Freedom (df) : The degrees of freedom are the number of values that have the freedom to vary when computing a statistic. For example, if you have ‘n’ observations in one group, then the degrees of freedom for that group is ‘n-1’.
  • Mean Square (MS) : Mean Square is the average squared deviation and is calculated by dividing the sum of squares by the corresponding degrees of freedom.
  • F-Ratio : This is the test statistic for ANOVAs, and it’s the ratio of the between-group variance to the within-group variance. If the between-group variance is significantly larger than the within-group variance, the F-ratio will be large and likely significant.
  • Null Hypothesis (H0) : This is the hypothesis that there is no difference between the group means.
  • Alternative Hypothesis (H1) : This is the hypothesis that there is a difference between at least two of the group means.
  • p-value : This is the probability of obtaining a test statistic as extreme as the one that was actually observed, assuming that the null hypothesis is true. If the p-value is less than the significance level (usually 0.05), then the null hypothesis is rejected in favor of the alternative hypothesis.
  • Post-hoc tests : These are follow-up tests conducted after an ANOVA when the null hypothesis is rejected, to determine which specific groups’ means (levels) are different from each other. Examples include Tukey’s HSD, Scheffe, Bonferroni, among others.

Types of ANOVA

Types of ANOVA are as follows:

One-way (or one-factor) ANOVA

This is the simplest type of ANOVA, which involves one independent variable. For example, comparing the effect of different types of diet (vegetarian, pescatarian, omnivore) on cholesterol level.

Two-way (or two-factor) ANOVA

This involves two independent variables. This allows for testing the effect of each independent variable on the dependent variable, as well as testing if there’s an interaction effect between the independent variables on the dependent variable.

Repeated Measures ANOVA

This is used when the same subjects are measured multiple times under different conditions, or at different points in time. This type of ANOVA is often used in longitudinal studies.

Mixed Design ANOVA

This combines features of both between-subjects (independent groups) and within-subjects (repeated measures) designs. In this model, one factor is a between-subjects variable and the other is a within-subjects variable.

Multivariate Analysis of Variance (MANOVA)

This is used when there are two or more dependent variables. It tests whether changes in the independent variable(s) correspond to changes in the dependent variables.

Analysis of Covariance (ANCOVA)

This combines ANOVA and regression. ANCOVA tests whether certain factors have an effect on the outcome variable after removing the variance for which quantitative covariates (interval variables) account. This allows the comparison of an outcome variable between groups while statistically controlling for the effect of other continuous variables that are not of primary interest.

Nested ANOVA

This model is used when the groups can be clustered into categories. For example, if you were comparing students’ performance from different classrooms and different schools, “classroom” could be nested within “school.”

ANOVA Formulas

ANOVA Formulas are as follows:

Sum of Squares Total (SST)

This represents the total variability in the data. It is the sum of the squared differences between each observation and the overall mean:

SST = Σ (yᵢ − ȳ)², summed over every observation

  • yᵢ represents each individual data point
  • ȳ represents the grand mean (mean of all observations)

Sum of Squares Within (SSW)

This represents the variability within each group or factor level. It is the sum of the squared differences between each observation and its group mean:

SSW = Σᵢ Σⱼ (yᵢⱼ − ȳᵢ)²

  • yᵢⱼ represents the jth data point within the ith group
  • ȳᵢ represents the mean of the ith group

Sum of Squares Between (SSB)

This represents the variability between the groups. It is the sum of the squared differences between the group means and the grand mean, weighted by the number of observations in each group:

SSB = Σᵢ nᵢ (ȳᵢ − ȳ)²

  • nᵢ represents the number of observations in the ith group
  • ȳᵢ represents the mean of the ith group
  • ȳ represents the grand mean

Degrees of Freedom

The degrees of freedom are the number of values that have the freedom to vary when calculating a statistic.

For within groups (dfW): dfW = N − k

For between groups (dfB): dfB = k − 1

For total (dfT): dfT = N − 1

  • N represents the total number of observations
  • k represents the number of groups

Mean Squares

Mean squares are the sum of squares divided by the respective degrees of freedom.

Mean Squares Between (MSB): MSB = SSB / dfB

Mean Squares Within (MSW): MSW = SSW / dfW

F-Statistic

The F-statistic is used to test whether the variability between the groups is significantly greater than the variability within the groups:

F = MSB / MSW

If the F-statistic is significantly higher than what would be expected by chance, we reject the null hypothesis that all group means are equal.
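Tying the formulas in this section together, the sketch below computes SST, SSW, SSB, the degrees of freedom, the mean squares, and the F-statistic by hand with NumPy, then cross-checks the F value against scipy.stats.f_oneway; the numbers are arbitrary illustration data:

```python
import numpy as np
from scipy import stats

# Three illustrative groups (any numeric samples would do).
groups = [
    np.array([4.0, 5.0, 6.0, 5.5]),
    np.array([6.5, 7.0, 7.5, 6.0]),
    np.array([9.0, 8.5, 9.5, 8.0]),
]

all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()
k, N = len(groups), all_obs.size

# Sums of squares, following the formulas above.
ss_total = ((all_obs - grand_mean) ** 2).sum()
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)

# Degrees of freedom and mean squares.
df_between, df_within = k - 1, N - k
ms_between = ss_between / df_between
ms_within = ss_within / df_within

f_by_hand = ms_between / ms_within
print(f"F (by hand) = {f_by_hand:.3f}")
print(f"F (scipy)   = {stats.f_oneway(*groups).statistic:.3f}")
```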

Examples of ANOVA

Example 1:

Suppose a psychologist wants to test the effect of three different types of exercise (yoga, aerobic exercise, and weight training) on stress reduction. The dependent variable is the stress level, which can be measured using a stress rating scale.

Here are hypothetical stress ratings for a group of participants after they followed each of the exercise regimes for a period:

  • Yoga: [3, 2, 2, 1, 2, 2, 3, 2, 1, 2]
  • Aerobic Exercise: [2, 3, 3, 2, 3, 2, 3, 3, 2, 2]
  • Weight Training: [4, 4, 5, 5, 4, 5, 4, 5, 4, 5]

The psychologist wants to determine if there is a statistically significant difference in stress levels between these different types of exercise.

To conduct the ANOVA:

1. State the hypotheses:

  • Null Hypothesis (H0): There is no difference in mean stress levels between the three types of exercise.
  • Alternative Hypothesis (H1): There is a difference in mean stress levels between at least two of the types of exercise.

2. Calculate the ANOVA statistics:

  • Compute the Sum of Squares Between (SSB), Sum of Squares Within (SSW), and Sum of Squares Total (SST).
  • Calculate the Degrees of Freedom (dfB, dfW, dfT).
  • Calculate the Mean Squares Between (MSB) and Mean Squares Within (MSW).
  • Compute the F-statistic (F = MSB / MSW).

3. Check the p-value associated with the calculated F-statistic.

  • If the p-value is less than the chosen significance level (often 0.05), then we reject the null hypothesis in favor of the alternative hypothesis. This suggests there is a statistically significant difference in mean stress levels between the three exercise types.

4. Post-hoc tests

  • If we reject the null hypothesis, we conduct a post-hoc test to determine which specific groups’ means (exercise types) are different from each other.
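As a hedged sketch of how steps 2–4 above could be carried out in Python for the stress-rating data (the variable names are illustrative only):

```python
from scipy import stats

yoga = [3, 2, 2, 1, 2, 2, 3, 2, 1, 2]
aerobic = [2, 3, 3, 2, 3, 2, 3, 3, 2, 2]
weights = [4, 4, 5, 5, 4, 5, 4, 5, 4, 5]

# Steps 2-3: compute the F-statistic and p-value for the three groups.
f_stat, p_value = stats.f_oneway(yoga, aerobic, weights)
print(f"F = {f_stat:.2f}, p = {p_value:.5f}")

# Step 4: if p < 0.05, follow up with a post hoc test such as Tukey's HSD
# (e.g., statsmodels' pairwise_tukeyhsd) to see which exercise types differ.
```

Because the weight-training ratings are clearly higher than those of the other two groups, the F-statistic comes out large and the p-value very small, so the post hoc step would then identify which pairs of exercise types differ.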

Example 2:

Suppose an agricultural scientist wants to compare the yield of three varieties of wheat. The scientist randomly selects four fields for each variety and plants them. After harvest, the yield from each field is measured in bushels. Here are the hypothetical yields:

The scientist wants to know if the differences in yields are due to the different varieties or just random variation.

Here’s how to apply the one-way ANOVA to this situation:

  • Null Hypothesis (H0): The means of the three populations are equal.
  • Alternative Hypothesis (H1): At least one population mean is different.
  • Calculate the Degrees of Freedom (dfB for between groups, dfW for within groups, dfT for total).
  • If the p-value is less than the chosen significance level (often 0.05), then we reject the null hypothesis in favor of the alternative hypothesis. This would suggest there is a statistically significant difference in mean yields among the three varieties.
  • If we reject the null hypothesis, we conduct a post-hoc test to determine which specific groups’ means (wheat varieties) are different from each other.

How to Conduct ANOVA

Conducting an Analysis of Variance (ANOVA) involves several steps. Here’s a general guideline on how to perform it:

  • Null Hypothesis (H0): The means of all groups are equal.
  • Alternative Hypothesis (H1): At least one group mean is different from the others.
  • The significance level (often denoted as α) is usually set at 0.05. This implies that you are willing to accept a 5% risk of rejecting the null hypothesis when it is actually true.
  • Data should be collected for each group under study. Make sure that the data meet the assumptions of an ANOVA: normality, independence, and homogeneity of variances.
  • Compute the Sum of Squares Between (SSB), Sum of Squares Within (SSW), and Sum of Squares Total (SST).
  • Calculate the Degrees of Freedom (df) for each sum of squares (dfB, dfW, dfT).
  • Compute the Mean Squares Between (MSB) and Mean Squares Within (MSW) by dividing the sum of squares by the corresponding degrees of freedom.
  • Compute the F-statistic as the ratio of MSB to MSW.
  • Determine the critical F-value from the F-distribution table using dfB and dfW.
  • If the calculated F-statistic is greater than the critical F-value, reject the null hypothesis.
  • If the p-value associated with the calculated F-statistic is smaller than the significance level (0.05 typically), you reject the null hypothesis.
  • If you rejected the null hypothesis, you can conduct post-hoc tests (like Tukey’s HSD) to determine which specific groups’ means (if you have more than two groups) are different from each other.
  • Regardless of the result, report your findings in a clear, understandable manner. This typically includes reporting the test statistic, p-value, and whether the null hypothesis was rejected.
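In practice, statistical software handles most of these computations. As a hedged sketch (the data frame and column names are invented for illustration), the statsmodels workflow below fits the one-way model and prints an ANOVA table containing the sums of squares, degrees of freedom, F-statistic, and p-value:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Illustrative long-format data: one row per observation.
df = pd.DataFrame({
    "group": ["A"] * 5 + ["B"] * 5 + ["C"] * 5,
    "score": [23, 25, 27, 22, 26, 30, 31, 29, 32, 28, 24, 26, 25, 27, 23],
})

# Fit the one-way model (score explained by the categorical factor 'group').
model = ols("score ~ C(group)", data=df).fit()

# Build the ANOVA table: sum_sq, df, F, and PR(>F) for the factor and residuals.
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)

# If PR(>F) is below alpha, follow up with a post hoc test such as Tukey's HSD
# (see the pairwise_tukeyhsd sketch earlier in this article).
```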

When to use ANOVA

ANOVA (Analysis of Variance) is used when you have three or more groups and you want to compare their means to see if they are significantly different from each other. It is a statistical method that is used in a variety of research scenarios. Here are some examples of when you might use ANOVA:

  • Comparing Groups : If you want to compare the performance of more than two groups, for example, testing the effectiveness of different teaching methods on student performance.
  • Evaluating Interactions : In a two-way or factorial ANOVA, you can test for an interaction effect. This means you are not only interested in the effect of each individual factor, but also whether the effect of one factor depends on the level of another factor.
  • Repeated Measures : If you have measured the same subjects under different conditions or at different time points, you can use repeated measures ANOVA to compare the means of these repeated measures while accounting for the correlation between measures from the same subject.
  • Experimental Designs : ANOVA is often used in experimental research designs when subjects are randomly assigned to different conditions and the goal is to compare the means of the conditions.

Here are the assumptions that must be met to use ANOVA:

  • Normality : The data should be approximately normally distributed.
  • Homogeneity of Variances : The variances of the groups you are comparing should be roughly equal. This assumption can be tested using Levene’s test or Bartlett’s test.
  • Independence : The observations should be independent of each other. This assumption is met if the data is collected appropriately with no related groups (e.g., twins, matched pairs, repeated measures).
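These assumptions can be checked before running the ANOVA. The sketch below applies standard scipy tests to invented group data: Shapiro-Wilk for normality within each group, plus Levene’s and Bartlett’s tests for homogeneity of variances:

```python
from scipy import stats

group_a = [23, 25, 27, 22, 26]
group_b = [30, 31, 29, 32, 28]
group_c = [24, 26, 25, 27, 23]

# Normality: Shapiro-Wilk test per group (a large p-value suggests no
# detectable departure from normality).
for name, g in [("A", group_a), ("B", group_b), ("C", group_c)]:
    stat, p = stats.shapiro(g)
    print(f"Group {name}: Shapiro-Wilk p = {p:.3f}")

# Homogeneity of variances: Levene's test (robust to non-normality)
# or Bartlett's test (assumes normality).
print(f"Levene p   = {stats.levene(group_a, group_b, group_c).pvalue:.3f}")
print(f"Bartlett p = {stats.bartlett(group_a, group_b, group_c).pvalue:.3f}")
```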

Applications of ANOVA

The Analysis of Variance (ANOVA) is a powerful statistical technique that is used widely across various fields and industries. Here are some of its key applications:

Agriculture

ANOVA is commonly used in agricultural research to compare the effectiveness of different types of fertilizers, crop varieties, or farming methods. For example, an agricultural researcher could use ANOVA to determine if there are significant differences in the yields of several varieties of wheat under the same conditions.

Manufacturing and Quality Control

ANOVA is used to determine if different manufacturing processes or machines produce different levels of product quality. For instance, an engineer might use it to test whether there are differences in the strength of a product based on the machine that produced it.

Marketing Research

Marketers often use ANOVA to test the effectiveness of different advertising strategies. For example, a marketer could use ANOVA to determine whether different marketing messages have a significant impact on consumer purchase intentions.

Healthcare and Medicine

In medical research, ANOVA can be used to compare the effectiveness of different treatments or drugs. For example, a medical researcher could use ANOVA to test whether there are significant differences in recovery times for patients who receive different types of therapy.

Education

ANOVA is used in educational research to compare the effectiveness of different teaching methods or educational interventions. For example, an educator could use it to test whether students perform significantly differently when taught with different teaching methods.

Psychology and Social Sciences

Psychologists and social scientists use ANOVA to compare group means on various psychological and social variables. For example, a psychologist could use it to determine if there are significant differences in stress levels among individuals in different occupations.

Biology and Environmental Sciences

Biologists and environmental scientists use ANOVA to compare different biological and environmental conditions. For example, an environmental scientist could use it to determine if there are significant differences in the levels of a pollutant in different bodies of water.

Advantages of ANOVA

Here are some advantages of using ANOVA:

Comparing Multiple Groups: One of the key advantages of ANOVA is the ability to compare the means of three or more groups. This makes it more powerful and flexible than the t-test, which is limited to comparing only two groups.

Control of Type I Error: When comparing multiple groups, the chances of making a Type I error (false positive) increases. One of the strengths of ANOVA is that it controls the Type I error rate across all comparisons. This is in contrast to performing multiple pairwise t-tests which can inflate the Type I error rate.

Testing Interactions: In factorial ANOVA, you can test not only the main effect of each factor, but also the interaction effect between factors. This can provide valuable insights into how different factors or variables interact with each other.

Handling Continuous and Categorical Variables: ANOVA can handle both continuous and categorical variables. The dependent variable is continuous and the independent variables are categorical.

Robustness: ANOVA is considered robust to violations of the normality assumption when group sizes are equal. This means that even if your data do not perfectly meet the normality assumption, you might still get valid results.

Provides Detailed Analysis: ANOVA provides a detailed breakdown of variances and interactions between variables which can be useful in understanding the underlying factors affecting the outcome.

Capability to Handle Complex Experimental Designs: Advanced types of ANOVA (like repeated measures ANOVA, MANOVA, etc.) can handle more complex experimental designs, including those where measurements are taken on the same subjects over time, or when you want to analyze multiple dependent variables at once.

Disadvantages of ANOVA

ANOVA also has some limitations or disadvantages that are important to consider:

Assumptions: ANOVA relies on several assumptions including normality (the data follows a normal distribution), independence (the observations are independent of each other), and homogeneity of variances (the variances of the groups are roughly equal). If these assumptions are violated, the results of the ANOVA may not be valid.

Sensitivity to Outliers: ANOVA can be sensitive to outliers. A single extreme value in one group can affect the sum of squares and consequently influence the F-statistic and the overall result of the test.

Dichotomous Variables: ANOVA is not suitable for dichotomous variables (variables that can take only two values, like yes/no or male/female). It is used to compare the means of groups for a continuous dependent variable.

Lack of Specificity: Although ANOVA can tell you that there is a significant difference between groups, it doesn’t tell you which specific groups are significantly different from each other. You need to carry out further post-hoc tests (like Tukey’s HSD or Bonferroni) for these pairwise comparisons.

Complexity with Multiple Factors: When dealing with multiple factors and interactions in factorial ANOVA, interpretation can become complex. The presence of interaction effects can make main effects difficult to interpret.

Requires Larger Sample Sizes: To detect an effect of a certain size, ANOVA generally requires larger sample sizes than a t-test.

Equal Group Sizes: While not always a strict requirement, ANOVA is most powerful and its assumptions are most likely to be met when groups are of equal or similar sizes.


Reporting and Interpreting One-Way Analysis of Variance (ANOVA) Using a Data-Driven Example: A Practical Guide for Social Science Researchers

  • Simon NTUMI University of Education, Winneba, West Africa, Ghana

One-way (between-groups) analysis of variance (ANOVA) is a statistical procedure used to analyse variation in a response variable (a continuous random variable) measured under conditions defined by discrete factors (classification variables, often with nominal levels). The tool is used to detect a difference in the means of three or more independent groups; it compares the sample or group means in order to make inferences about the population means, and it can be construed as an extension of the independent t-test. Given the omnibus nature of ANOVA, many researchers in the social sciences and related fields appear to have difficulty reporting and interpreting ANOVA results in their studies. This paper provides detailed processes and steps showing how researchers can practically analyse and interpret ANOVA in their research. The paper explains that, in applying ANOVA, a researcher must first formulate the null and, where appropriate, the alternative hypothesis. After the data have been gathered and cleaned, the researcher must test the statistical assumptions to see whether the data meet them. The researcher then performs the necessary computations and calculates the F-ratio (the ANOVA result) using software. Finally, the researcher compares the calculated F-ratio with the critical table value, or simply checks the p-value against the established alpha. If the calculated F-ratio is greater than the table value, the null hypothesis is rejected and the alternative hypothesis is upheld.


