JMP | Statistical Discovery.™ From SAS.

Statistics Knowledge Portal

A free online introduction to statistics

The One-Sample t-Test

What is the one-sample t-test?

The one-sample t-test is a statistical hypothesis test used to determine whether an unknown population mean is different from a specific value.

When can I use the test?

You can use the test for continuous data. Your data should be a random sample from a normal population.

What if my data isn’t nearly normally distributed?

If your sample sizes are very small, you might not be able to test for normality. You might need to rely on your understanding of the data. When you cannot safely assume normality, you can perform a nonparametric test that doesn’t assume normality.

Using the one-sample t-test

See how to perform a one-sample t-test using statistical software.

  • Download JMP to follow along using the sample data included with the software.
  • To see more JMP tutorials, visit the JMP Learning Library.

The sections below discuss what we need for the test, checking our data, performing the test, understanding test results, and statistical details.

What do we need?

For the one-sample t-test, we need one variable.

We also have an idea, or hypothesis, that the mean of the population has some value. Here are two examples:

  • A hospital has a random sample of cholesterol measurements for men. These patients were seen for issues other than cholesterol. They were not taking any medications for high cholesterol. The hospital wants to know if the unknown mean cholesterol for patients is different from a goal level of 200 mg/dL.
  • We measure the grams of protein for a sample of energy bars. The label claims that the bars have 20 grams of protein. We want to know if the labels are correct or not.

One-sample t-test assumptions

For a valid test, we need data values that are:

  • Independent (values are not related to one another).
  • Continuous.
  • Obtained via a simple random sample from the population.

Also, the population is assumed to be normally distributed.

One-sample t-test example

Imagine we have collected a random sample of 31 energy bars from a number of different stores to represent the population of energy bars available to the general consumer. The labels on the bars claim that each bar contains 20 grams of protein.

Table 1: Grams of protein in a random sample of energy bars

Energy Bar - Grams of Protein

20.70  27.46  22.15  19.85  21.29  24.75
20.75  22.91  25.34  20.33  21.54  21.08
22.14  19.56  21.10  18.04  24.12  19.95
19.72  18.28  16.26  17.46  20.53  22.12
25.06  22.44  19.08  19.88  21.39  22.33  25.79

If you look at the table above, you see that some bars have less than 20 grams of protein. Other bars have more. You might think that the data support the idea that the labels are correct. Others might disagree. The statistical test provides a sound method to make a decision, so that everyone makes the same decision on the same set of data values. 

Checking the data

Let’s start by answering: Is the t-test an appropriate method to test that the energy bars have 20 grams of protein? The list below checks the requirements for the test.

  • The data values are independent. The grams of protein in one energy bar do not depend on the grams in any other energy bar. An example of dependent values would be if you collected energy bars from a single production lot. A sample from a single lot is representative of that lot, not energy bars in general.
  • The data values are grams of protein. The measurements are continuous.
  • We assume the energy bars are a simple random sample from the population of energy bars available to the general consumer (i.e., a mix of lots of bars).
  • We assume the population from which we are collecting our sample is normally distributed, and for large samples, we can check this assumption.

We decide that the t-test is an appropriate method.

Before jumping into analysis, we should take a quick look at the data. The figure below shows a histogram and summary statistics for the energy bars.

Histogram and summary statistics for the grams of protein in energy bars

From a quick look at the histogram, we see that there are no unusual points, or outliers. The data look roughly bell-shaped, so our assumption of a normal distribution seems reasonable.

From a quick look at the statistics, we see that the average is 21.40, above 20. Does this average from our sample of 31 bars invalidate the label's claim that the unknown population mean is 20 grams of protein? Or not?

How to perform the one-sample t-test

For the t-test calculations we need the mean, standard deviation, and sample size. These are shown in the summary statistics section of Figure 1 above.

We round the statistics to two decimal places. Software will show more decimal places, and use them in calculations. (Note that Table 1 shows only two decimal places; the actual data used to calculate the summary statistics has more.)

We start by finding the difference between the sample mean and 20:

$ 21.40-20\ =\ 1.40$

Next, we calculate the standard error for the mean. The calculation is:

Standard Error for the mean = $ \frac{s}{\sqrt{n}}= \frac{2.54}{\sqrt{31}}=0.456 $

This matches the value in Figure 1 above.

We now have the pieces for our test statistic. We calculate our test statistic as:

$ t =  \frac{\text{Difference}}{\text{Standard Error}}= \frac{1.40}{0.456}=3.07 $
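The arithmetic above can be reproduced in a few lines of Python, using only the rounded summary statistics from Figure 1 (a sketch; the full-precision data would give a slightly more exact answer):

```python
import math

mean, sd, n = 21.40, 2.54, 31   # summary statistics from Figure 1
mu0 = 20                        # hypothesized population mean (the label claim)

diff = mean - mu0               # difference between sample mean and 20
se = sd / math.sqrt(n)          # standard error of the mean
t_stat = diff / se              # test statistic

print(round(diff, 2), round(se, 3), round(t_stat, 2))  # 1.4 0.456 3.07
```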

To make our decision, we compare the test statistic to a value from the t-distribution. This activity involves four steps.

  • We calculate a test statistic. Our test statistic is 3.07.
  • We decide on the risk we are willing to take for declaring a difference when there is not a difference. For the energy bar data, we decide that we are willing to take a 5% risk of saying that the unknown population mean is different from 20 when in fact it is not. In statistics-speak, we set α = 0.05. In practice, you should set your risk level (α) before collecting the data.

We find the value from the t-distribution based on our decision. For a t-test, we need the degrees of freedom to find this value. The degrees of freedom are based on the sample size. For the energy bar data:

degrees of freedom = $ n - 1 = 31 - 1 = 30 $

The critical value of t with α = 0.05 and 30 degrees of freedom is +/- 2.042. Most statistics books have look-up tables for the distribution. You can also find tables online. The most likely situation is that you will use software and will not use printed tables.
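Rather than a printed table, you can ask software for the critical value directly. A minimal sketch with SciPy (assuming `scipy` is installed):

```python
from scipy import stats

alpha, df = 0.05, 30
# two-sided test: split alpha between the two tails
t_crit = stats.t.ppf(1 - alpha / 2, df)
print(round(t_crit, 3))  # 2.042
```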

We compare the value of our statistic (3.07) to the t value. Since 3.07 > 2.042, we reject the null hypothesis that the mean grams of protein is equal to 20. We make a practical conclusion that the labels are incorrect, and the population mean grams of protein is greater than 20.

Statistical details

Let’s look at the energy bar data and the one-sample t-test using statistical terms.

Our null hypothesis is that the underlying population mean is equal to 20. The null hypothesis is written as:

$ H_0:  \mu = 20 $

The alternative hypothesis is that the underlying population mean is not equal to 20. The labels claiming 20 grams of protein would be incorrect. This is written as:

$ H_a:  \mu \neq 20 $

This is a two-sided test. We are testing if the population mean is different from 20 grams in either direction. If we can reject the null hypothesis that the mean is equal to 20 grams, then we make a practical conclusion that the labels for the bars are incorrect. If we cannot reject the null hypothesis, then we make a practical conclusion that the labels for the bars may be correct.

We calculate the average for the sample and then calculate the difference with the population mean, mu:

$  \overline{x} - \mathrm{\mu} $

We calculate the standard error as:

$ \frac{s}{ \sqrt{n}} $

The formula shows the sample standard deviation as s and the sample size as n .  

The test statistic uses the formula shown below:

$  \dfrac{\overline{x} - \mathrm{\mu}} {s / \sqrt{n}} $

We compare the test statistic to a t value with our chosen alpha value and the degrees of freedom for our data. Using the energy bar data as an example, we set α = 0.05. The degrees of freedom ( df ) are based on the sample size and are calculated as:

$ df = n - 1 = 31 - 1 = 30 $

Statisticians write the t value with α = 0.05 and 30 degrees of freedom as:

$ t_{0.05,30} $

The t value for a two-sided test with α = 0.05 and 30 degrees of freedom is +/- 2.042. There are two possible results from our comparison:

  • The test statistic is less extreme than the critical  t  values; in other words, the test statistic is not less than -2.042, or is not greater than +2.042. You fail to reject the null hypothesis that the mean is equal to the specified value. In our example, you would be unable to conclude that the label for the protein bars should be changed.
  • The test statistic is more extreme than the critical  t  values; in other words, the test statistic is less than -2.042, or is greater than +2.042. You reject the null hypothesis that the mean is equal to the specified value. In our example, you conclude that either the label should be updated or the production process should be improved to produce, on average, bars with 20 grams of protein.

Testing for normality

The normality assumption is more important for small sample sizes than for larger sample sizes.

Normal distributions are symmetric, which means they are “even” on both sides of the center. Normal distributions do not have extreme values, or outliers. You can check these two features of a normal distribution with graphs. Earlier, we decided that the energy bar data was “close enough” to normal to go ahead with the assumption of normality. The figure below shows a normal quantile plot for the data, and supports our decision.

Normal quantile plot for energy bar data

You can also perform a formal test for normality using software. The figure below shows results of testing for normality with JMP software. We cannot reject the hypothesis of a normal distribution. 

Testing for normality using JMP software

We can go ahead with the assumption that the energy bar data is normally distributed.
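Outside JMP, a common formal check is the Shapiro-Wilk test. A sketch with SciPy, using the rounded values from Table 1 (the full-precision data would give a slightly different result):

```python
from scipy import stats

# rounded energy bar data from Table 1 (n = 31)
protein = [20.70, 27.46, 22.15, 19.85, 21.29, 24.75,
           20.75, 22.91, 25.34, 20.33, 21.54, 21.08,
           22.14, 19.56, 21.10, 18.04, 24.12, 19.95,
           19.72, 18.28, 16.26, 17.46, 20.53, 22.12,
           25.06, 22.44, 19.08, 19.88, 21.39, 22.33, 25.79]

# Shapiro-Wilk test: a large p-value means we cannot reject normality
stat, p = stats.shapiro(protein)
print(round(stat, 3), round(p, 3))
```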

What if my data are not from a Normal distribution?

If your sample size is very small, it is hard to test for normality. In this situation, you might need to use your understanding of the measurements. For example, for the energy bar data, the company knows that the underlying distribution of grams of protein is normally distributed. Even for a very small sample, the company would likely go ahead with the t -test and assume normality.

What if you know the underlying measurements are not normally distributed? Or what if your sample size is large and the test for normality is rejected? In this situation, you can use a nonparametric test. Nonparametric analyses do not depend on an assumption that the data values are from a specific distribution. For the one-sample t-test, one possible nonparametric alternative is the Wilcoxon signed rank test.
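In SciPy, for example, the one-sample Wilcoxon signed rank test is run on the differences from the hypothesized value. A minimal sketch using the rounded Table 1 values:

```python
from scipy import stats

# rounded energy bar data from Table 1 (n = 31)
protein = [20.70, 27.46, 22.15, 19.85, 21.29, 24.75,
           20.75, 22.91, 25.34, 20.33, 21.54, 21.08,
           22.14, 19.56, 21.10, 18.04, 24.12, 19.95,
           19.72, 18.28, 16.26, 17.46, 20.53, 22.12,
           25.06, 22.44, 19.08, 19.88, 21.39, 22.33, 25.79]
mu0 = 20

# the signed rank test works on the differences from the hypothesized mean
diffs = [x - mu0 for x in protein]
stat, p = stats.wilcoxon(diffs)   # two-sided by default
print(round(p, 4))
```

Like the t-test, this nonparametric test rejects the hypothesis that the center of the distribution is 20 for the energy bar data.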

Understanding p-values

Using a visual, you can check to see if your test statistic is more extreme than a specified value in the distribution. The figure below shows a t- distribution with 30 degrees of freedom.

t-distribution with 30 degrees of freedom and α = 0.05

Since our test is two-sided and we set α = 0.05, the figure shows that the value of 2.042 “cuts off” a combined 5% of the area under the curve (2.5% in each tail).

The next figure shows our results. You can see the test statistic falls above the specified critical value. It is far enough “out in the tail” to reject the hypothesis that the mean is equal to 20.

Our results displayed in a t-distribution with 30 degrees of freedom

Putting it all together with Software

You are likely to use software to perform a t-test. The figure below shows results for the one-sample t-test for the energy bar data from JMP software.

One-sample t-test results for energy bar data using JMP software

The software shows the null hypothesis value of 20 and the average and standard deviation from the data. The test statistic is 3.07. This matches the calculations above.

The software shows results for a two-sided test and for one-sided tests. We want the two-sided test. Our null hypothesis is that the mean grams of protein is equal to 20. Our alternative hypothesis is that the mean grams of protein is not equal to 20. The software shows a p-value of 0.0046 for the two-sided test. This p-value is the likelihood of seeing a sample average at least as far from 20 as the 21.4 we observed, when the underlying population mean is actually 20. A p-value of 0.0046 means about 46 chances in 10,000. We feel confident in rejecting the null hypothesis that the population mean is equal to 20.
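If you only have the test statistic and degrees of freedom, you can recover the two-sided p-value yourself. A sketch with SciPy, using the rounded t statistic from above (so the result is close to, but not exactly, JMP's 0.0046):

```python
from scipy import stats

t_stat, df = 3.07, 30
# area in both tails beyond |t| gives the two-sided p-value
p_two_sided = 2 * stats.t.sf(t_stat, df)
print(round(p_two_sided, 4))
```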


An Introduction to t Tests | Definitions, Formula and Examples

Published on January 31, 2020 by Rebecca Bevans . Revised on June 22, 2023.

A t test is a statistical test that is used to compare the means of two groups. It is often used in hypothesis testing to determine whether a process or treatment actually has an effect on the population of interest, or whether two groups are different from one another.

  • The null hypothesis ( H 0 ) is that the true difference between these group means is zero.
  • The alternate hypothesis ( H a ) is that the true difference is different from zero.

Table of contents

  • When to use a t test
  • What type of t test should I use
  • Performing a t test
  • Interpreting test results
  • Presenting the results of a t test
  • Other interesting articles
  • Frequently asked questions about t tests

A t test can only be used when comparing the means of two groups (a.k.a. pairwise comparison). If you want to compare more than two groups, or if you want to do multiple pairwise comparisons, use an ANOVA test or a post-hoc test.

The t test is a parametric test of difference, meaning that it makes the same assumptions about your data as other parametric tests. The t test assumes your data:

  • are independent
  • are (approximately) normally distributed
  • have a similar amount of variance within each group being compared (a.k.a. homogeneity of variance)

If your data do not fit these assumptions, you can try a nonparametric alternative to the t test, such as the Wilcoxon signed-rank test.


When choosing a t test, you will need to consider two things: whether the groups being compared come from a single population or two different populations, and whether you want to test the difference in a specific direction.

What type of t-test should I use?

One-sample, two-sample, or paired t test?

  • If the groups come from a single population (e.g., measuring before and after an experimental treatment), perform a paired t test . This is a within-subjects design .
  • If the groups come from two different populations (e.g., two different species, or people from two separate cities), perform a two-sample t test (a.k.a. independent t test ). This is a between-subjects design .
  • If there is one group being compared against a standard value (e.g., comparing the acidity of a liquid to a neutral pH of 7), perform a one-sample t test .

One-tailed or two-tailed t test?

  • If you only care whether the two populations are different from one another, perform a two-tailed t test .
  • If you want to know whether one population mean is greater than or less than the other, perform a one-tailed t test.
  • Your observations come from two separate populations (separate species), so you perform a two-sample t test.
  • You don’t care about the direction of the difference, only whether there is a difference, so you choose to use a two-tailed t test.

The t test estimates the true difference between two group means using the ratio of the difference in group means over the pooled standard error of both groups. You can calculate it manually using a formula, or use statistical analysis software.

T test formula

The formula for the two-sample t test (a.k.a. the Student’s t-test) is shown below.

\begin{equation*}t=\dfrac{\bar{x}_{1}-\bar{x}_{2}}{\sqrt{s^2\left(\frac{1}{n_{1}}+\frac{1}{n_{2}}\right)}}\end{equation*}

In this formula, t is the t value, x̄₁ and x̄₂ are the means of the two groups being compared, s² is the pooled variance of the two groups, and n₁ and n₂ are the number of observations in each group.

A larger t value shows that the difference between group means is greater than the pooled standard error, indicating a more significant difference between the groups.

You can compare your calculated t value against the values in a critical value chart (e.g., Student’s t table) to determine whether your t value is greater than what would be expected by chance. If so, you can reject the null hypothesis and conclude that the two groups are in fact different.

T test function in statistical software

Most statistical software (R, SPSS, etc.) includes a t test function. This built-in function will take your raw data and calculate the t value. It will then compare it to the critical value, and calculate a p -value . This way you can quickly see whether your groups are statistically different.
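For instance, a minimal sketch with SciPy (the petal-length numbers here are invented for illustration, not the actual Scribbr data set; `equal_var=True` gives the pooled-variance Student's t test matching the formula above):

```python
from scipy import stats

# hypothetical petal lengths (cm) for two flower species
species_a = [1.4, 1.5, 1.3, 1.6, 1.4, 1.5]
species_b = [4.7, 4.5, 4.9, 4.6, 4.8, 4.4]

# pooled-variance (Student's) two-sample t test
t_val, p_val = stats.ttest_ind(species_a, species_b, equal_var=True)
print(round(t_val, 2), p_val)  # large |t|, tiny p: the group means differ
```

Setting `equal_var=False` instead would run Welch's t test, which does not assume equal variances.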

In your comparison of flower petal lengths, you decide to perform your t test using R. The code looks like this:

Download the data set to practice by yourself.

Sample data set

If you perform the t test for your flower hypothesis in R, you will receive the following output:

T-test output in R

The output provides:

  • An explanation of what is being compared, called data in the output table.
  • The t value : -33.719. Note that it’s negative; this is fine! In most cases, we only care about the absolute value of the difference, or the distance from 0. It doesn’t matter which direction.
  • The degrees of freedom : 30.196. Degrees of freedom is related to your sample size, and shows how many ‘free’ data points are available in your test for making comparisons. The greater the degrees of freedom, the better your statistical test will work.
  • The p value : 2.2e-16 (i.e., 2.2 × 10⁻¹⁶, a number with 15 zeros after the decimal point before the 22). This describes the probability that you would see a t value at least as extreme as this one by chance.
  • A statement of the alternative hypothesis ( H a ). In this test, the H a is that the difference is not 0.
  • The 95% confidence interval . This is the range of numbers within which the true difference in means will be 95% of the time. This can be changed from 95% if you want a larger or smaller interval, but 95% is very commonly used.
  • The mean petal length for each group.

When reporting your t test results, the most important values to include are the t value , the p value , and the degrees of freedom for the test. These will communicate to your audience whether the difference between the two groups is statistically significant (a.k.a. that it is unlikely to have happened by chance).

You can also include the summary statistics for the groups being compared, namely the mean and standard deviation . In R, the code for calculating the mean and the standard deviation from the data looks like this:

flower.data %>% group_by(Species) %>% summarize(mean_length = mean(Petal.Length), sd_length = sd(Petal.Length))

In our example, you would report the results like this:

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square test of independence
  • Statistical power
  • Descriptive statistics
  • Degrees of freedom
  • Pearson correlation
  • Null hypothesis

Methodology

  • Double-blind study
  • Case-control study
  • Research ethics
  • Data collection
  • Hypothesis testing
  • Structured interviews

Research bias

  • Hawthorne effect
  • Unconscious bias
  • Recall bias
  • Halo effect
  • Self-serving bias
  • Information bias

A t-test is a statistical test that compares the means of two samples . It is used in hypothesis testing , with a null hypothesis that the difference in group means is zero and an alternate hypothesis that the difference in group means is different from zero.

A t-test measures the difference in group means divided by the pooled standard error of the two group means.

In this way, it calculates a number (the t-value) illustrating the magnitude of the difference between the two group means being compared, and estimates the likelihood that this difference exists purely by chance (p-value).

Your choice of t-test depends on whether you are studying one group or two groups, and whether you care about the direction of the difference in group means.

If you are studying one group, use a paired t-test to compare the group mean over time or after an intervention, or use a one-sample t-test to compare the group mean to a standard value. If you are studying two groups, use a two-sample t-test .

If you want to know only whether a difference exists, use a two-tailed test . If you want to know if one group mean is greater or less than the other, use a left-tailed or right-tailed one-tailed test .

A one-sample t-test is used to compare a single population to a standard value (for example, to determine whether the average lifespan of a specific town is different from the country average).

A paired t-test is used to compare a single population before and after some experimental intervention or at two different points in time (for example, measuring student performance on a test before and after being taught the material).

A t-test should not be used to measure differences among more than two groups, because the error structure for a t-test will underestimate the actual error when many groups are being compared.

If you want to compare the means of several groups at once, it’s best to use another statistical test such as ANOVA or a post-hoc test.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Bevans, R. (2023, June 22). An Introduction to t Tests | Definitions, Formula and Examples. Scribbr. Retrieved September 21, 2024, from https://www.scribbr.com/statistics/t-test/


One Sample T Test – Clearly Explained with Examples | ML+

  • October 8, 2020
  • Selva Prabhakaran

One sample T-Test tests if the given sample of observations could have been generated from a population with a specified mean.

If it is found from the test that the means are statistically different, we infer that the sample is unlikely to have come from the population.

For example, suppose you want to test a car manufacturer’s claim that their cars give a highway mileage of 20 kmpl on average. You sample 10 cars from the dealership, measure their mileage, and use the T-test to determine if the manufacturer’s claim is true.

By the end of this post, you will know when and how to do the T-Test, the concept and math behind it, how to set the null and alternate hypotheses, how to use T-tables, how to interpret one-tailed and two-tailed T-Tests, and how to implement the test in R and Python using a practical example.


Introduction

  • Purpose of one sample t test
  • How to set the null and alternate hypothesis
  • Procedure to do one sample t test
  • One sample t test example
  • One sample t test implementation
  • How to decide which t test to perform: two tailed, upper tailed or lower tailed

  • Related Posts

The ‘One Sample T Test’ is one of the 3 types of T Tests. It is used when you want to test whether the mean of the population from which the sample is drawn equals a hypothesized value. You will understand this statement (and all about the One Sample T Test) better by the end of this post.

The T Test was first introduced by William Sealy Gosset in 1908. Since he published his method in the journal Biometrika under the pseudonym ‘Student’, the test came to be known as Student’s T Test.

Since it assumes that the test statistic, based on the sample mean, follows a known sampling distribution (the t-distribution), the Student’s T Test is considered a parametric test.

The purpose of the One Sample T Test is to determine if a sample of observations could have come from a population with a specified parameter (like the mean).

It is typically implemented on small samples.

For example, given a sample of 15 items, you want to test if the sample mean is the same as a hypothesized mean (population). That is, essentially you want to know if the sample came from the given population or not.

Let’s suppose, you want to test if the mean weight of a manufactured component (from a sample size 15) is of a particular value (55 grams), with a 99% confidence.

Image showing manufacturing quality testing

How did we determine One sample T-test is the right test for this?


Because there is only one sample involved, and you want to compare the mean of this sample against a particular (hypothesized) value.

To do this, you need to set up a null hypothesis and an alternate hypothesis .

The null hypothesis usually assumes that there is no difference between the sample mean and the hypothesized mean (comparison mean). The purpose of the T Test is to test if the null hypothesis can be rejected or not.

Depending on how the problem is stated, the alternate hypothesis can be one of the following 3 cases:

  • Case 1: H1 : x̅ != µ. Used when the true sample mean is not equal to the comparison mean. Use Two Tailed T Test.
  • Case 2: H1 : x̅ > µ. Used when the true sample mean is greater than the comparison mean. Use Upper Tailed T Test.
  • Case 3: H1 : x̅ < µ. Used when the true sample mean is less than the comparison mean. Use Lower Tailed T Test.

Where x̅ is the sample mean and µ is the population mean for comparison. We will go more into the detail of these three cases after solving some practical examples.

Example 1: A customer service company wants to know if their support agents are performing on par with industry standards.

According to a report the standard mean resolution time is 20 minutes per ticket. The sample group has a mean at 21 minutes per ticket with a standard deviation of 7 minutes.

Can you tell if the company’s support performance is better than the industry standard or not?

Example 2: A farming company wants to know if a new fertilizer has improved crop yield or not.

Historic data shows the average yield of the farm is 20 tonne per acre. They decide to test a new organic fertilizer on a smaller sample of farms and observe the new yield is 20.175 tonne per acre with a standard deviation of 3.02 tonne for 12 different farms.

Did the new fertilizer work?

Step 1: Define the Null Hypothesis (H0) and Alternate Hypothesis (H1)

H0: Sample mean (x̅) = Hypothesized Population mean (µ)

H1: Sample mean (x̅) != Hypothesized Population mean (µ)

The alternate hypothesis can also state that the sample mean is greater than or less than the comparison mean.

Step 2: Compute the test statistic (T)

$$t = \frac{\bar{X} - \mu}{\hat{\sigma} / \sqrt{n}}$$

where $\hat{\sigma}$ is the sample standard deviation and $\hat{\sigma} / \sqrt{n}$ is the standard error of the mean.

Step 3: Find the T-critical from the T-Table

Use the degrees of freedom and the alpha level (0.05) to find the T-critical.

Step 4: Determine if the computed test statistic falls in the rejection region.

Alternatively, simply compute the P-value. If it is less than the significance level (0.05 or 0.01), reject the null hypothesis.
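The four steps above can be strung together in Python. A sketch using SciPy and a hypothetical sample of component weights (the numbers are invented for illustration, echoing the 55-gram example earlier):

```python
from scipy import stats

# hypothetical weights (grams) of 15 manufactured components
sample = [54.2, 55.8, 53.9, 56.1, 54.7, 55.3, 54.9, 55.6,
          53.5, 55.0, 54.4, 55.9, 54.1, 55.2, 54.8]
mu0, alpha = 55, 0.05   # hypothesized mean and significance level

# Step 2: compute the test statistic (and a p-value for Step 4)
t_stat, p_val = stats.ttest_1samp(sample, mu0)

# Step 3: critical value for a two-tailed test, df = n - 1
t_crit = stats.t.ppf(1 - alpha / 2, len(sample) - 1)

# Step 4: does the statistic fall in the rejection region?
reject = abs(t_stat) > t_crit   # equivalent to p_val < alpha
print(round(t_stat, 3), round(t_crit, 3), reject)
```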

Problem Statement:

We have the potato yield from 12 different farms. We know that the standard potato yield for the given variety is µ=20.

Test if the potato yield from these farms is significantly better than the standard yield.

Step 1: Define the Null and Alternate Hypothesis

H0: x̅ = 20

H1: x̅ > 20

n = 12. Since this is a one sample T test, the degrees of freedom = n-1 = 12-1 = 11.

Let’s set alpha = 0.05, to meet 95% confidence level.

Step 2: Calculate the Test Statistic (T)

1. Calculate the sample mean:

$$\bar{x} = \frac{x_1 + x_2 + x_3 + \ldots + x_n}{n} = 20.175$$

2. Calculate the sample standard deviation:

$$\hat{\sigma} = \sqrt{\frac{(x_1 - \bar{x})^2 + (x_2 - \bar{x})^2 + \ldots + (x_n - \bar{x})^2}{n-1}} = 3.0211$$

3. Substitute into the T Statistic formula:

$$T = \frac{\bar{x} - \mu}{\hat{\sigma}/\sqrt{n}} = \frac{20.175 - 20}{3.0211/\sqrt{12}} = 0.2006$$

Step 3: Find the T-Critical

Confidence level = 0.95, alpha=0.05. For one tailed test, look under 0.05 column. For d.o.f = 12 – 1 = 11, T-Critical = 1.796 .

Now you might wonder why a ‘One Tailed test’ was chosen. This is because of the way you define the alternate hypothesis. Had the alternate hypothesis simply stated that the mean is not equal to 20, we would have used a two tailed test. More details about this topic are in the next section.

Image showing T-Table for one sample T Test

Step 4: Does it fall in rejection region?

Since the computed T Statistic is less than the T-critical, it does not fall in the rejection region.

Image showing one-tailed T Test

Clearly, the calculated T statistic does not fall in the rejection region. So, we do not reject the null hypothesis.
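You can verify the table lookup and the decision in code. A sketch with SciPy, using the summary numbers from the worked example above:

```python
from scipy import stats

t_stat = 0.2006          # test statistic computed above
alpha, df = 0.05, 11     # one-tailed test, n - 1 = 11

t_crit = stats.t.ppf(1 - alpha, df)   # upper-tailed critical value
print(round(t_crit, 3))               # 1.796, matching the T-table
print(t_stat > t_crit)                # False: fail to reject H0
```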

Since you want to perform a ‘One Tailed Greater than’ test (that is, the sample mean is greater than the comparison mean), you need to specify alternative='greater' in the t.test() function. Because, by default, the t.test() does a two tailed test (which is what you do when your alternate hypothesis simply states sample mean != comparison mean).

The P-value computed here is nothing but p = Pr(T > t) (upper-tailed), where t is the calculated T statistic.

Image showing T-Distribution for P-value Computation for One Sample T-Test

In Python, One sample T Test is implemented in ttest_1samp() function in the scipy package. However, it does a Two tailed test by default , and reports a signed T statistic. That means, the reported P-value will always be computed for a Two-tailed test. To calculate the correct P value, you need to divide the output P-value by 2.

Apply the following logic if you are performing a one tailed test:

  • For a greater-than test: Reject H0 if p/2 < alpha (0.05). In this case, t will be greater than 0.
  • For a less-than test: Reject H0 if p/2 < alpha (0.05). In this case, t will be less than 0.

Since it is a one tailed test, the real p-value is 0.8446/2 = 0.4223. We do not reject the Null Hypothesis either way.
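In newer SciPy versions (1.6+), `ttest_1samp()` also accepts an `alternative` argument, so you don't need to halve the p-value by hand. A sketch with invented yield numbers (the article's raw farm data are not reproduced here):

```python
from scipy import stats

# hypothetical yields (tonne/acre) from 12 farms
yields = [21.1, 20.5, 19.8, 22.0, 20.9, 21.3,
          19.5, 20.7, 21.8, 20.2, 19.9, 21.0]
mu0 = 20

two_sided = stats.ttest_1samp(yields, mu0)                          # default
greater = stats.ttest_1samp(yields, mu0, alternative='greater')     # upper-tailed

# with t > 0, the upper-tailed p-value is half the two-sided one
print(two_sided.pvalue, greater.pvalue)
```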

The decision of whether the computed test statistic falls in the rejection region depends on how the alternate hypothesis is defined.

We know the Null Hypothesis is H0: µD = 0, where µD is the difference in the means, that is, the sample mean minus the comparison mean.

You can also write H0 as: x̅ = µ , where x̅ is sample mean and ‘µ’ is the comparison mean.

Case 1: If H1 : x̅ != µ , then the rejection region lies in both tails of the T-Distribution (two-tailed). The alternate hypothesis only states that the means are not equal; it makes no claim about which one is greater.

In this case, use Two Tailed T Test .

Here, P-value = 2 · Pr(T > |t|)

Image showing two-tailed-test

Case 2: If H1: x̅ > µ , then the rejection region lies in the upper tail of the T-Distribution (upper-tailed). Use this when you expect the mean of the sample of interest to be greater than the comparison mean (example: Component A has a longer time-to-failure than Component B).

In such case, use Upper Tailed based test.

Here, P-value = Pr(T > t)

Image showing upper tailed T-Distribution

Case 3: If H1: x̅ < µ, the rejection region lies on the lower tail of the T-distribution (lower-tailed). Use this when the mean of the sample of interest is expected to be less than the comparison mean.

In that case, use a lower-tailed test.

Here, P-value = Pr(T < t)

Image showing T-Distribution for Lower Tailed T-Test
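The three cases can be summarized as rejection-region checks. A minimal sketch (the function name is illustrative; t_crit is the positive critical value from a t-table, using the alpha/2 column for the two-sided case):

```python
def reject_h0(t_stat, t_crit, alternative):
    """Decide whether t falls in the rejection region."""
    if alternative == 'two-sided':   # Case 1: both tails
        return abs(t_stat) > t_crit
    if alternative == 'greater':     # Case 2: upper tail only
        return t_stat > t_crit
    if alternative == 'less':        # Case 3: lower tail only
        return t_stat < -t_crit
    raise ValueError("unknown alternative: " + alternative)

# Example: t = 2.5 with t_crit = 1.761 is rejected under
# 'two-sided' and 'greater', but not under 'less'.
```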

Hope you are now clear about the one-sample t test. If something is still not clear, write in the comments. Next topic: the two-sample t test. Stay tuned.



© Machinelearningplus. All rights reserved.


Statistics By Jim

Making statistics intuitive

How t-Tests Work: 1-sample, 2-sample, and Paired t-Tests

By Jim Frost

T-tests are statistical hypothesis tests that analyze one or two sample means. When you analyze your data with any t-test, the procedure reduces your entire sample to a single value, the t-value. In this post, I describe how each type of t-test calculates the t-value. I don’t explain this just so you can understand the calculation, but I describe it in a way that really helps you grasp how t-tests work.


How 1-Sample t-Tests Calculate t-Values

The equation for how the 1-sample t-test produces a t-value based on your sample is below:

T-value formula for the 1-sample t-test: t = (x̄ - μ) / (s / √n), where x̄ is the sample mean, μ is the null hypothesis value, s is the sample standard deviation, and n is the sample size.

This equation is a ratio, and a common analogy is the signal-to-noise ratio. The numerator is the signal in your sample data, and the denominator is the noise. Let’s see how t-tests work by comparing the signal to the noise!

The Signal – The Size of the Sample Effect

In the signal-to-noise analogy, the numerator of the ratio is the signal. The effect that is present in the sample is the signal. It’s a simple calculation. In a 1-sample t-test, the sample effect is the sample mean minus the value of the null hypothesis. That’s the top part of the equation.

For example, if the sample mean is 20 and the null value is 5, the sample effect size is 15. We’re calling this the signal because this sample estimate is our best estimate of the population effect.

The calculation for the signal portion of t-values is such that when the sample effect equals zero, the numerator equals zero, which in turn means the t-value itself equals zero. The estimated sample effect (signal) equals zero when there is no difference between the sample mean and the null hypothesis value. For example, if the sample mean is 5 and the null value is 5, the signal equals zero (5 – 5 = 0).

The size of the signal increases when the difference between the sample mean and null value increases. The difference can be either negative or positive, depending on whether the sample mean is greater than or less than the value associated with the null hypothesis.

A relatively large signal in the numerator produces t-values that are further away from zero.


The Noise – The Variability or Random Error in the Sample

The denominator of the ratio is the standard error of the mean, which measures the sample variation. The standard error of the mean represents how much random error is in the sample and how well the sample estimates the population mean.

As the value of this statistic increases, the sample mean provides a less precise estimate of the population mean. In other words, high levels of random error increase the probability that your sample mean is further away from the population mean.

In our analogy, random error represents noise. Why? When there is more random error, you are more likely to see considerable differences between the sample mean and the null hypothesis value in cases where  the null is true . Noise appears in the denominator to provide a benchmark for how large the signal must be to distinguish from the noise.

Signal-to-Noise Ratio

Our signal-to-noise ratio analogy equates to:

T-value as the signal-to-noise ratio: t = signal / noise = (sample mean - null value) / (standard error of the mean).

Both of these statistics are in the same units as your data. Let’s calculate a couple of t-values to see how to interpret them.

  • If the signal is 10 and the noise is 2, your t-value is 5. The signal is 5 times the noise.
  • If the signal is 10 and the noise is 5, your t-value is 2. The signal is 2 times the noise.

The signal is the same in both examples, but it is easier to distinguish from the lower amount of noise in the first example. In this manner, t-values indicate how clear the signal is from the noise. If the signal is of the same general magnitude as the noise, it’s probable that random error causes the difference between the sample mean and null value rather than an actual population effect.
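In code, the two bullet examples above are just this division:

```python
# t-value as signal divided by noise (the two examples above).
t1 = 10 / 2   # signal 10, noise 2 -> t = 5.0
t2 = 10 / 5   # same signal, more noise -> t = 2.0
```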

Paired t-Tests Are Really 1-Sample t-Tests

Paired t-tests require dependent samples. I’ve seen a lot of confusion over how a paired t-test works and when you should use it. Pssst! Here’s a secret! Paired t-tests and 1-sample t-tests are the same hypothesis test incognito!

You use a 1-sample t-test to assess the difference between a sample mean and the value of the null hypothesis.

A paired t-test takes paired observations (like before and after), subtracts one from the other, and conducts a 1-sample t-test on the differences. Typically, a paired t-test determines whether the paired differences are significantly different from zero.

Download the CSV data file to check this yourself: T-testData . All of the statistical results are the same when you perform a paired t-test using the Before and After columns versus performing a 1-sample t-test on the Differences column.

Image of a worksheet with data for a paired t-test.

Once you realize that paired t-tests are the same as 1-sample t-tests on paired differences, you can focus on the deciding characteristic: does it make sense to analyze the differences between two columns?

Suppose the Before and After columns contain test scores and there was an intervention in between. If each row in the data contains the same subject in the Before and After column, it makes sense to find the difference between the columns because it represents how much each subject changed after the intervention. The paired t-test is a good choice.

On the other hand, if a row has different subjects in the Before and After columns, it doesn’t make sense to subtract the columns. You should use the 2-sample t-test described below.

The paired t-test is a convenience for you. It eliminates the need for you to calculate the difference between two columns yourself. Remember, double-check that this difference is meaningful! If using a paired t-test is valid, you should use it because it provides more statistical power than the 2-sample t-test, which I discuss in my post about independent and dependent samples .
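You can verify the equivalence yourself with a few lines of Python (the before/after scores here are made up for illustration):

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical paired scores for the same 8 subjects.
before = [72, 68, 75, 80, 66, 71, 69, 74]
after  = [75, 70, 78, 79, 70, 74, 72, 77]

# Step 1: subtract one column from the other.
diffs = [a - b for a, b in zip(after, before)]

# Step 2: run a 1-sample t-test on the differences against 0.
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / sqrt(n))   # about 4.677, df = n - 1

# A paired t-test on the Before and After columns reports exactly
# this t-value and these degrees of freedom.
```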

How Two-Sample T-tests Calculate T-Values

Use the 2-sample t-test when you want to analyze the difference between the means of two independent samples. This test is also known as the independent samples t-test . Click the link to learn more about its hypotheses, assumptions, and interpretations.

Like the other t-tests, this procedure reduces all of your data to a single t-value in a process similar to the 1-sample t-test. The signal-to-noise analogy still applies.

Here’s the equation for the t-value in a 2-sample t-test.

T-value formula for the 2-sample t-test: t = (x̄1 - x̄2) / (standard error of the difference between the means).

The equation is still a ratio, and the numerator still represents the signal. For a 2-sample t-test, the signal, or effect, is the difference between the two sample means. This calculation is straightforward. If the first sample mean is 20 and the second mean is 15, the effect is 5.

Typically, the null hypothesis states that there is no difference between the two samples. In the equation, if both groups have the same mean, the numerator, and the ratio as a whole, equals zero. Larger differences between the sample means produce stronger signals.

The denominator again represents the noise for a 2-sample t-test. However, you can use two different values depending on whether you assume that the variation in the two groups is equal or not. Most statistical software lets you choose which value to use.

Regardless of the denominator value you use, the 2-sample t-test works by determining how distinguishable the signal is from the noise. To ascertain that the difference between means is statistically significant, you need a high positive or negative t-value.
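As a sketch, here is the pooled (equal-variance) version of that calculation with made-up data; Welch's version swaps in a different denominator but keeps the same signal-to-noise structure:

```python
from math import sqrt
from statistics import mean, variance

group1 = [20, 22, 19, 24, 25]   # hypothetical sample 1
group2 = [15, 17, 14, 18, 16]   # hypothetical sample 2
n1, n2 = len(group1), len(group2)

signal = mean(group1) - mean(group2)   # difference between the means

# Pooled variance: the two sample variances combined,
# weighted by their degrees of freedom.
sp2 = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
noise = sqrt(sp2 * (1 / n1 + 1 / n2))  # standard error of the difference

t = signal / noise   # about 4.472
```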

How Do T-tests Use T-values to Determine Statistical Significance?

Here’s what we’ve learned about the t-values for the 1-sample t-test, paired t-test, and 2-sample t-test:

  • Each test reduces your sample data down to a single t-value based on the ratio of the effect size to the variability in your sample.
  • A t-value of zero indicates that your sample results match the null hypothesis precisely.
  • Larger absolute t-values represent stronger signals, or effects, that stand out more from the noise.

For example, a t-value of 2 indicates that the signal is twice the magnitude of the noise.

Great … but how do you get from that to determining whether the effect size is statistically significant? After all, the purpose of t-tests is to assess hypotheses. To find out, read the companion post to this one: How t-Tests Work: t-Values, t-Distributions and Probabilities . Click here for step-by-step instructions on how to do t-tests in Excel !

If you’d like to learn about other hypothesis tests using the same general approach, read my posts about:

  • How F-tests Work in ANOVA
  • How Chi-Squared Tests of Independence Work


Reader Interactions


January 9, 2023 at 11:11 am

Hi Jim, thank you for explaining this I will revert to this during my 8 weeks in class everyday to make sure I understand what I’m doing . May I ask more questions in the future.


November 27, 2021 at 1:37 pm

This was an awesome piece, very educative and easy to understand


June 19, 2021 at 1:53 pm

Hi Jim, I found your posts very helpful. Could you plz explain how to do T test for a panel data?


June 19, 2021 at 3:40 pm

You’re limited by what you can do with t-tests. For panel data and t-tests, you can compare the same subjects at two points in time using a paired t-test. For more complex arrangements, you can use repeated measures ANOVA or specify a regression model to meet your needs.


February 11, 2020 at 10:34 pm

Hi Jim: I was reviewing this post in preparation for an analysis I plan to do, and I’d like to ask your advice. Each year, staff complete an all-employee survey, and results are reported at workgroup level of analysis. I would like to compare mean scores of several workgroups from one year to the next (in this case, 2018 and 2019 scores). For example, I would compare workgroup mean scores on psychological safety between 2018 and 2019. I am leaning toward a paired t test. However, my one concern is that….even though I am comparing workgroup to workgroup from one year to the next….it is certainly possible that there may be some different employees in a given workgroup from one year to the next (turnover, transition, etc.)….Assuming that is the case with at least some of the workgroups, does that make a paired t test less meanginful? Would I still use a paired t test or would another type t test be more appropriate? I’m thinking because we are dealing with workgroup mean scores (and not individual scores), then it may still be okay to compare meaningfully (avoiding an ecological fallacy). Thoughts?

Many thanks for these great posts. I enjoy reading them…!


April 8, 2019 at 11:22 pm

Hi jim. First of all, I really appreciate your posts!

When I use t-test via R or scikit learn, there is an option for homogeneity of variance. I think that option only applied to two sample t-test, but what should I do for that option?

Should I always perform f-test for check the homogeneity of variance? or Which one is a more strict assumption?


November 9, 2018 at 12:03 am

This blog is great. I’m at Stanford and can say this is a great supplement to class lectures. I love the fact that there aren’t formulas so as to get an intuitive feel. Thank you so much!

November 9, 2018 at 9:12 am

Thanks Mel! I’m glad it has been helpful! Your kind words mean a lot to me because I really strive to make these topics as easy to understand as possible!


December 29, 2017 at 4:14 pm

Thank you so much Jim! I have such a hard time understanding statistics without people like you who explain it using words to help me conceptualize rather than utilizing symbols only!

December 29, 2017 at 4:56 pm

Thank you, Jessica! Your kind words made my day. That’s what I want my blog to be all about. Providing simple but 100% accurate explanations for statistical concepts!

Happy New Year!


October 22, 2017 at 2:38 pm

Hi Jim, sure, I’ll go through it…Thank you..!

October 22, 2017 at 4:50 am

In summary, the t test tells, how the sample mean is different from null hypothesis, i.e. how the sample mean is different from null, but how does it comment about the significance? Is it like “more far from null is the more significant”? If it is so, could you give some more explanation about it?

October 22, 2017 at 2:30 pm

Hi Omkar, you’re in luck, I’ve written an entire blog post that talks about how t-tests actually use the t-values to determine statistical significance. In general, the further away from zero, the more significant it is. For all the information, read this post: How t-Tests Work: t-Values, t-Distributions, and Probabilities . I think this post will answer your questions.


September 12, 2017 at 2:46 am

Excellent explanation, appreciate you..!!

September 12, 2017 at 8:48 am

Thank you, Santhosh! I’m glad you found it helpful!

Comments and Questions

Calcworkshop

One Sample T Test Easily Explained w/ 5+ Examples!

// Last Updated: October 9, 2020 - Watch Video //

Did you know that a hypothesis test for a sample mean is the same thing as a one sample t-test?


Jenn, Founder Calcworkshop ® , 15+ Years Experience (Licensed & Certified Teacher)

Learn the how-to with 5 step-by-step examples.

Let’s go!

What is a One Sample T Test?

A one sample t-test determines whether or not the sample mean is statistically different (statistically significant) from a population mean.

While significance tests for population proportions are based on z-scores and the normal distribution, hypothesis testing for population means depends on whether or not the population standard deviation is known or unknown.

For a one sample t test, we compare a test variable against a test value. Whether or not we know the population standard deviation determines which type of test statistic we calculate.

T Test Vs. Z Test

So, determining whether to use a z-test or a t-test comes down to a few questions:

  • Are we working with a proportion (z-test) or a mean (z-test or t-test)?
  • Is the population standard deviation known (z-test) or unknown (t-test)?
  • Is the population normally distributed?
  • What is the sample size? If the sample has fewer than 30 observations, the population should be approximately normal; if it has 30 or more, the central limit theorem lets us treat the sampling distribution as approximately normal.
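One way to code this decision checklist (the function name and its return strings are illustrative only, not from any library):

```python
def choose_test(testing_proportion, sigma_known, n, population_normal):
    """Pick a test statistic following the checklist above."""
    if testing_proportion:
        return 'z-test'   # proportions use the normal approximation
    if sigma_known and (population_normal or n >= 30):
        return 'z-test'   # known sigma, normal(-ish) sampling distribution
    return 't-test'       # unknown sigma: use the t-distribution

choose_test(False, False, 15, True)   # 't-test'
```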

How To Calculate a Test Statistic

Standard Deviation Known

If the population standard deviation is known , then our significance test will follow a z-value. And as we learned while constructing confidence intervals, if our sample size is larger than 30, the Central Limit Theorem tells us the sampling distribution of the mean is approximately normal. If our sample size is less than 30, we need the population itself to be normal or approximately normal.

Z Test Statistic Formula: z = (x̄ - μ0) / (σ / √n)

Standard Deviation Unknown

If the population standard deviation is unknown , we will use a sample standard deviation that will be close enough to the unknown population standard deviation. But this will also cause us to have to use a t-distribution instead of a normal distribution as noted by StatTrek .

Just like we saw with confidence intervals for population means, the t-distribution has an additional parameter representing the degrees of freedom or the number of observations that can be chosen freely.

T Test Statistic Formula: t = (x̄ - μ0) / (s / √n)

This means that our test statistic will be a t-value rather than a z-value. But thankfully, how we find our p-value and draw our final inference is the same as for hypothesis testing for proportions, as the graphic below illustrates.

How To Find The P Value

Example Question

For example, imagine a company wants to test the claim that their batteries last more than 40 hours. Using a simple random sample of 15 batteries yielded a mean of 44.9 hours, with a standard deviation of 8.9 hours. Test this claim using a significance level of 0.05.

One Sample T Test Example

How To Find P Value From T

So, our p-value is a probability: it measures how likely it is, assuming the null hypothesis is true, to obtain a test statistic as extreme or more extreme than the one we observed. To find this value we either use a calculator or a t-table, as we demonstrate in the video.

We have significant evidence to support the company's claim that their batteries last more than 40 hours.
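Working the battery example through by hand (the critical value 1.761 for alpha = 0.05 and df = 14 is taken from a t-table):

```python
from math import sqrt

x_bar, mu0 = 44.9, 40   # sample mean and claimed mean (hours)
s, n = 8.9, 15          # sample standard deviation and sample size

t = (x_bar - mu0) / (s / sqrt(n))   # about 2.132
t_crit = 1.761                      # upper-tail critical value, df = 14

reject = t > t_crit   # True: reject H0 in favor of the claim
```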

What Does The P Value Mean?

Together we will work through various examples of how to create a hypothesis test about population means using normal distributions and t-distributions.

One Sample T Test – Lesson & Examples (Video)

  • Introduction to Video: One Sample t-test
  • 00:00:43 – Steps for conducting a hypothesis test for population means (one sample z-test or one sample t-test)
  • Exclusive Content for Members Only
  • 00:03:49 – Conduct a hypothesis test and confidence interval when population standard deviation is known (Example #1)
  • 00:13:49 – Test the null hypothesis when population standard deviation is known (Example #2)
  • 00:18:56 – Use a one-sample t-test to test a claim (Example #3)
  • 00:26:50 – Conduct a hypothesis test and confidence interval when population standard deviation is unknown (Example #4)
  • 00:37:16 – Conduct a hypothesis test by using a one-sample t-test and provide a confidence interval (Example #5)
  • 00:49:19 – Test the hypothesis by first finding the sample mean and standard deviation (Example #6)
  • Practice Problems with Step-by-Step Solutions
  • Chapter Tests with Video Solutions


Kent State University SPSS Tutorials

One Sample t Test


Sample Data Files

Our tutorials reference a dataset called "sample" in many examples. If you'd like to download the sample dataset to work through the examples, choose one of the files below:

  • Data definitions (*.pdf)
  • Data - Comma delimited (*.csv)
  • Data - Tab delimited (*.txt)
  • Data - Excel format (*.xlsx)
  • Data - SAS format (*.sas7bdat)
  • Data - SPSS format (*.sav)

The One Sample t Test examines whether the mean of a population is statistically different from a known or hypothesized value. The One Sample t Test is a parametric test.

This test is also known as:

  • Single Sample t Test

The variable used in this test is known as:

  • Test variable

In a One Sample t Test, the test variable's mean is compared against a "test value", which is a known or hypothesized value of the mean in the population. Test values may come from a literature review, a trusted research organization, legal requirements, or industry standards. For example:

  • A particular factory's machines are supposed to fill bottles with 150 milliliters of product. A plant manager wants to test a random sample of bottles to ensure that the machines are not under- or over-filling the bottles.
  • The United States Environmental Protection Agency (EPA) sets clearance levels for the amount of lead present in homes: no more than 10 micrograms per square foot on floors and no more than 100 micrograms per square foot on window sills ( as of December 2020 ). An inspector wants to test if samples taken from units in an apartment building exceed the clearance level.

Common Uses

The One Sample  t  Test is commonly used to test the following:

  • Statistical difference between a mean and a known or hypothesized value of the mean in the population.
  • Statistical difference between the mean of a change score and zero. This approach involves creating a change score from two variables and then comparing the mean change score to zero, which indicates whether any change occurred between the two time points for the original measures. If the mean change score is not significantly different from zero, no significant change occurred.

Note: The One Sample t Test can only compare a single sample mean to a specified constant. It cannot compare sample means between two or more groups. If you wish to compare the means of multiple groups to each other, you will likely want to run an Independent Samples t Test (to compare the means of two groups) or a One-Way ANOVA (to compare the means of two or more groups).

Data Requirements

Your data must meet the following requirements:

  • Test variable that is continuous (i.e., interval or ratio level)
  • Scores on the test variable are independent (i.e., there is no relationship between them)
    • Violation of this assumption will yield an inaccurate p value
  • Random sample of data from the population
  • Normal distribution (approximately) of the sample and population on the test variable
    • Non-normal population distributions, especially those that are thick-tailed or heavily skewed, considerably reduce the power of the test
    • Among moderate or large samples, a violation of normality may still yield accurate p values
  • Homogeneity of variances (i.e., variances approximately equal in both the sample and population)
  • No outliers

The null hypothesis ( H 0 ) and (two-tailed) alternative hypothesis ( H 1 ) of the One Sample t Test can be expressed as:

H 0 : µ =  µ 0   ("the population mean is equal to the [proposed] population mean") H 1 : µ ≠  µ 0   ("the population mean is not equal to the [proposed] population mean")

where µ is the "true" population mean and µ 0 is the proposed value of the population mean.

Test Statistic

The test statistic for a One Sample t Test is denoted t , which is calculated using the following formula:

$$ t = \frac{\overline{x}-\mu{}_{0}}{s_{\overline{x}}} $$

$$ s_{\overline{x}} = \frac{s}{\sqrt{n}} $$

where:

\(\mu_{0}\) = the test value (the proposed constant for the population mean)
\(\bar{x}\) = sample mean
\(n\) = sample size (i.e., number of observations)
\(s\) = sample standard deviation
\(s_{\bar{x}}\) = estimated standard error of the mean (\(s/\sqrt{n}\))

The calculated t value is then compared to the critical t value from the t distribution table with degrees of freedom df = n - 1 and the chosen confidence level. If the absolute value of the calculated t exceeds the critical t value, we reject the null hypothesis.
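The two formulas above translate directly into code. A sketch with hypothetical summary statistics (not values from the tutorial's dataset):

```python
from math import sqrt

def one_sample_t(x_bar, mu0, s, n):
    """t statistic and degrees of freedom from summary statistics."""
    se = s / sqrt(n)          # estimated standard error of the mean
    return (x_bar - mu0) / se, n - 1

t, df = one_sample_t(x_bar=103.2, mu0=100, s=12.0, n=36)
# t is about 1.6 with df = 35; compare |t| to the critical value.
```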

Data Set-Up

Your data should include one continuous, numeric variable (represented in a column) that will be used in the analysis. The variable's measurement level should be defined as Scale in the Variable View window.

Run a One Sample t Test

To run a One Sample t Test in SPSS, click  Analyze > Compare Means > One-Sample T Test .

The One-Sample T Test window opens where you will specify the variables to be used in the analysis. All of the variables in your dataset appear in the list on the left side. Move variables to the Test Variable(s) area by selecting them in the list and clicking the arrow button.


A Test Variable(s): The variable whose mean will be compared to the hypothesized population mean (i.e., Test Value). You may run multiple One Sample t Tests simultaneously by selecting more than one test variable. Each variable will be compared to the same Test Value. 

B Test Value: The hypothesized population mean against which your test variable(s) will be compared.

C Estimate effect sizes: Optional. If checked, will print effect size statistics -- namely, Cohen's d -- for the test(s). (Note: Effect sizes calculations for t tests were first added to SPSS Statistics in version 27, making them a relatively recent addition. If you do not see this option when you use SPSS, check what version of SPSS you're using.)

D Options: Clicking Options will open a window where you can specify the Confidence Interval Percentage and how the analysis will address Missing Values (i.e., Exclude cases analysis by analysis or Exclude cases listwise ). Click Continue when you are finished making specifications.


Click OK to run the One Sample t Test.

Problem Statement

According to the CDC , the mean height of U.S. adults ages 20 and older is about 66.5 inches (69.3 inches for males, 63.8 inches for females).

In our sample data, we have a sample of 435 college students from a single college. Let's test if the mean height of students at this college is significantly different than 66.5 inches using a one-sample t test. The null and alternative hypotheses of this test will be:

H 0 : µ Height = 66.5  ("the mean height is equal to 66.5") H 1 : µ Height ≠ 66.5  ("the mean height is not equal to 66.5")

Before the Test

In the sample data, we will use the variable Height , which is a continuous variable representing each respondent’s height in inches. The heights exhibit a range of values from 55.00 to 88.41 ( Analyze > Descriptive Statistics > Descriptives ).

Let's create a histogram of the data to get an idea of the distribution, and to see if  our hypothesized mean is near our sample mean. Click Graphs > Legacy Dialogs > Histogram . Move variable Height to the Variable box, then click OK .


To add vertical reference lines at the mean (or another location), double-click on the plot to open the Chart Editor, then click Options > X Axis Reference Line . In the Properties window, you can enter a specific location on the x-axis for the vertical line, or you can choose to have the reference line at the mean or median of the sample data. Click Apply to make sure your new line is added to the chart. Here, we have added two reference lines: one at the sample mean (the solid black line), and the other at 66.5 (the dashed red line).

From the histogram, we can see that height is relatively symmetrically distributed about the mean, though there is a slightly longer right tail. The reference lines indicate that the sample mean is slightly greater than the hypothesized mean, but not by a huge amount. It's possible that our test result could come back significant.

Running the Test

To run the One Sample t Test, click  Analyze > Compare Means > One-Sample T Test.  Move the variable Height to the Test Variable(s) area. In the Test Value field, enter 66.5.


The exact output layout depends on whether you are using SPSS Statistics 27 or later (which reports both one-sided and two-sided p-values) or 26 and earlier (which reports only the two-sided significance).

Two sections (boxes) appear in the output: One-Sample Statistics and One-Sample Test . The first section, One-Sample Statistics , provides basic information about the selected variable, Height , including the valid (nonmissing) sample size ( n ), mean, standard deviation, and standard error. In this example, the mean height of the sample is 68.03 inches, which is based on 408 nonmissing observations.


The second section, One-Sample Test , displays the results most relevant to the One Sample t Test. 


A Test Value : The number we entered as the test value in the One-Sample T Test window.

B t Statistic : The test statistic of the one-sample t test, denoted t . In this example, t = 5.810. Note that t is calculated by dividing the mean difference (E) by the estimated standard error of the mean (from the One-Sample Statistics box).

C df : The degrees of freedom for the test. For a one-sample t test, df = n - 1; so here, df = 408 - 1 = 407.

D Significance (One-Sided p and Two-Sided p): The p-values corresponding to one of the possible one-sided alternative hypotheses (in this case, µ Height > 66.5) and two-sided alternative hypothesis (µ Height ≠ 66.5), respectively. In our problem statement above, we were only interested in the two-sided alternative hypothesis.

E Mean Difference : The difference between the "observed" sample mean (from the One Sample Statistics box) and the "expected" mean (the specified test value (A)). The sign of the mean difference corresponds to the sign of the t value (B). The positive t value in this example indicates that the mean height of the sample is greater than the hypothesized value (66.5).

F Confidence Interval for the Difference : The confidence interval for the difference between the sample mean and the specified test value.
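The quantities in A through F can be reproduced from the summary statistics alone. The sketch below is a minimal Python illustration, not SPSS output; the function name is my own, and the sample standard deviation (5.33) is a hypothetical stand-in, since the text quotes only the mean, n, and t:

```python
from math import sqrt
from scipy import stats

def one_sample_t(sample_mean, sample_sd, n, test_value):
    """Return (t, df, two-sided p), mirroring the One-Sample Test table."""
    se = sample_sd / sqrt(n)             # standard error of the mean
    t = (sample_mean - test_value) / se  # B: t = mean difference / standard error
    df = n - 1                           # C: degrees of freedom
    p = 2 * stats.t.sf(abs(t), df)       # D: two-sided significance
    return t, df, p

# Values from the example above (the SD is assumed for illustration):
t, df, p = one_sample_t(sample_mean=68.03, sample_sd=5.33, n=408, test_value=66.5)
```

With n = 408, df comes out to 407, matching the output described above.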

Decision and Conclusions

Recall that our hypothesized population value was 66.5 inches, the [approximate] average height of the overall adult population in the U.S. Since p < 0.001, we reject the null hypothesis that the mean height of students at this college is equal to the hypothesized population mean of 66.5 inches and conclude that the mean height is significantly different from 66.5 inches.

Based on the results, we can state the following:

  • There is a significant difference between the mean height of the students at this college and the overall U.S. adult population ( p < .001).
  • The average height of students at this college is about 1.5 inches taller than the U.S. adult population average (95% CI [1.013, 2.050]).
  • Last Updated: Jul 10, 2024 11:08 AM
  • URL: https://libguides.library.kent.edu/SPSS

A one sample t test compares the mean with a hypothetical value. In most cases, the hypothetical value comes from theory. For example, if you express your data as 'percent of control', you can test whether the average differs significantly from 100. The hypothetical value can also come from previous data. For example, compare whether the mean systolic blood pressure differs from 135, a value determined in a previous study.

Learn more about the one sample t test

In this article you will learn the requirements and assumptions of a one sample t test, how to format and interpret the results of a one sample t test, and when to use different types of t tests.

One sample t test: Overview

The one sample t test, also referred to as a single sample t test, is a statistical hypothesis test used to determine whether the mean calculated from sample data collected from a single group is different from a designated value specified by the researcher. This designated value does not come from the data itself, but is an external value chosen for scientific reasons. Often, this designated value is a mean previously established in a population, a standard value of interest, or a mean concluded from other studies. Like all hypothesis testing, the one sample t test determines if there is enough evidence to reject the null hypothesis (H0) in favor of an alternative hypothesis (H1). The null hypothesis for a one sample t test can be stated as: "The population mean equals the specified mean value." The alternative hypothesis for a one sample t test can be stated as: "The population mean is different from the specified mean value."

Single sample t test

The one sample t test differs from most statistical hypothesis tests because it does not compare two separate groups or look at a relationship between two variables. It is a straightforward comparison between data gathered on a single variable from one population and a specified value defined by the researcher. The one sample t test can be used to look for a difference in only one direction from the standard value (a one-tailed t test ) or can be used to look for a difference in either direction from the standard value (a two-tailed t test ).

Requirements and Assumptions for a one sample t test

A one sample t test should be used only when data has been collected on one variable for a single population and there is no comparison being made between groups. For a valid one sample t test analysis, data values must be all of the following:

  • Independent
  • Continuous
  • Obtained via random sampling
  • Approximately normally distributed

The one sample t test assumes that all "errors" in the data are independent. The term "error" refers to the difference between each value and the group mean. The results of a t test only make sense when the scatter is random: whatever factor caused a value to be too high or too low affects only that one value. Prism cannot test this assumption, but there are graphical ways to explore data to verify this assumption is met.

A t test is only appropriate to apply in situations where data represent variables that are continuous measurements. As they rely on the calculation of a mean value, variables that are categorical should not be analyzed using a t test.

The results of a t test should be based on a random sample and only be generalized to the larger population from which samples were drawn.

As with all parametric hypothesis testing, the one sample t test assumes that you have sampled your data from a population that follows a normal (or Gaussian) distribution. While this assumption is not as important with large samples, it is important with small sample sizes, especially less than 10. If your data do not come from a Gaussian distribution , there are three options to accommodate this. One option is to transform the values to make the distribution more Gaussian, perhaps by transforming all values to their reciprocals or logarithms. Another choice is to use the Wilcoxon signed rank nonparametric test instead of the t test. A final option is to use the t test anyway, knowing that the t test is fairly robust to departures from a Gaussian distribution with large samples.
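The three options above can be sketched in Python with SciPy; the data here are hypothetical (a right-skewed lognormal sample), chosen only to illustrate each alternative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.5, size=40)   # hypothetical skewed sample

# Option 1: transform toward normality (here, logs), then t test against log(1.0) = 0
t_log, p_log = stats.ttest_1samp(np.log(x), popmean=0.0)

# Option 2: Wilcoxon signed rank test against the hypothesized value 1.0
w, p_wilcoxon = stats.wilcoxon(x - 1.0)

# Option 3: run the t test on the raw values anyway (fairly robust at this n)
t_raw, p_raw = stats.ttest_1samp(x, popmean=1.0)
```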

How to format a one sample t test

Ideally, data for a one sample t test should be collected and entered as a single column from which a mean value can be easily calculated. If data is entered on a table with multiple subcolumns, Prism requires one of the following choices to be selected to perform the analysis:

  • Each subcolumn of data can be analyzed separately
  • An average of the values in the columns across each row can be calculated, and the analysis conducted on this new stack of means, or
  • All values in all columns can be treated as one sample of data (paying no attention to which row or column any values are in).

How the one sample t test calculator works

Prism calculates the t ratio by dividing the difference between the actual and hypothetical means by the standard error of the actual mean. The equation is written as follows, where \(\bar{x}\) is the sample mean, \(\mu\) is the hypothetical mean (specified value), \(S\) is the standard deviation of the sample, and \(n\) is the sample size:

$$t\ =\ \frac{\bar{x}\ -\ \mu}{S\ /\ \sqrt{n}}$$

A p value is computed based on the calculated t ratio and the numbers of degrees of freedom present (which equals sample size minus 1). The one sample t test calculator assumes it is a two-tailed one sample t test, meaning you are testing for a difference in either direction from the specified value.
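As a sketch of the calculation just described (hypothetical data; variable names are mine, not Prism's), the t ratio and its two-tailed p value can be computed by hand and checked against SciPy's built-in one sample t test:

```python
import numpy as np
from scipy import stats

x = np.array([5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.4, 4.7])  # hypothetical sample
mu = 5.0                                                 # hypothetical mean (specified value)

n = x.size
t = (x.mean() - mu) / (x.std(ddof=1) / np.sqrt(n))  # t ratio as in the formula above
p = 2 * stats.t.sf(abs(t), df=n - 1)                # two-tailed p, df = n - 1

# Cross-check against SciPy's built-in one sample t test
t_ref, p_ref = stats.ttest_1samp(x, popmean=mu)
```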

How to interpret results of a one sample t test

As discussed, a one sample t test compares the mean of a single column of numbers against a hypothetical mean. This hypothetical mean can be based upon a specific standard or other external prediction. The test produces a p value which requires careful interpretation.

The p value answers this question: If the data were sampled from a Gaussian population with a mean equal to the hypothetical value you entered, what is the chance of randomly selecting N data points and finding a mean as far (or further) from the hypothetical value as observed here?

If the p value is large (usually defined to mean greater than 0.05), the data do not give you any reason to conclude that the population mean differs from the designated value to which it has been compared. This is not the same as saying that the true mean equals the hypothetical value, but rather states that there is no evidence of a difference. Thus, we cannot reject the null hypothesis (H0).

If the p value is small (usually defined to mean less than or equal to 0.05), then it is unlikely that the discrepancy observed between the sample mean and hypothetical mean is due to a coincidence arising from random sampling. There is evidence to reject the idea that the difference is coincidental and conclude instead that the population has a mean that is different from the hypothetical value to which it has been compared. The difference is statistically significant, and the null hypothesis is therefore rejected.

If the null hypothesis is rejected, the question of whether the difference is scientifically important still remains. The confidence interval can be a useful tool in answering this question. Prism reports the 95% confidence interval for the difference between the actual and hypothetical mean. In interpreting these results, one can be 95% sure that this range includes the true difference. It requires scientific judgment to determine if this difference is truly meaningful.
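The 95% confidence interval for the difference can be sketched the same way (hypothetical data; this is not Prism's internal code, just the standard t-based formula):

```python
import numpy as np
from scipy import stats

x = np.array([5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.4, 4.7])  # hypothetical sample
mu = 5.0                                                 # hypothetical mean

n = x.size
diff = x.mean() - mu                     # difference between actual and hypothetical mean
se = x.std(ddof=1) / np.sqrt(n)          # standard error of the mean
tcrit = stats.t.ppf(0.975, df=n - 1)     # critical t value for 95% confidence
ci = (diff - tcrit * se, diff + tcrit * se)
```

If this interval excludes zero, the two-tailed test at the 0.05 level rejects the null hypothesis; scientific judgment then decides whether the interval's endpoints represent a meaningful difference.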


When to use different types of t tests

There are three types of t tests which can be used for hypothesis testing:

  • One sample t test
  • Independent two-sample (or unpaired) t test
  • Paired sample t test

As described, a one sample t test should be used only when data has been collected on one variable for a single population and there is no comparison being made between groups. It only applies when the mean value for data is intended to be compared to a fixed and defined number.

In most cases involving data analysis, however, there are multiple groups of data either representing different populations being compared, or the same population being compared at different times or conditions. For these situations, it is not appropriate to use a one sample t test. Other types of t tests are appropriate for these specific circumstances:

Independent Two-Sample t test (Unpaired t test)

The independent sample t test, also referred to as the unpaired t test, is used to compare the means of two different samples. The independent two-sample t test comes in two different forms:

  • the standard Student's t test, which assumes that the variances of the two groups are equal.
  • the Welch's t test , which is less restrictive than the original Student's test: it does not assume that the variance is the same in the two groups, which results in fractional degrees of freedom.

The two methods give very similar results when the sample sizes are equal and the variances are similar.
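A quick way to see this in practice (hypothetical data): with equal sample sizes the two t statistics coincide exactly, and only the degrees of freedom, and hence the p values, differ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(10.0, 1.0, size=30)   # hypothetical group A
b = rng.normal(10.5, 3.0, size=30)   # hypothetical group B, larger variance

t_student, p_student = stats.ttest_ind(a, b, equal_var=True)   # Student's t test
t_welch, p_welch = stats.ttest_ind(a, b, equal_var=False)      # Welch's t test

# With n_A == n_B, the pooled and Welch standard errors are equal, so the
# t statistics match; Welch's smaller (fractional) df gives a larger p value.
```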

Paired Sample t test

The paired sample t test is used to compare the means of two related groups of samples. Put into other words, it is used in a situation where you have two values (i.e., a pair of values) for the same group of samples. Often these two values are measured from the same samples either at two different times, under two different conditions, or after a specific intervention.

You can perform multiple independent two-sample comparison tests simultaneously in Prism. Select from parametric and nonparametric tests and specify if the data are unpaired or paired. Try performing a t test with a 30-day free trial of Prism .

Watch this video to learn how to choose between a paired and unpaired t test.

Example of how to apply the appropriate t test

"Alkaline" labeled bottled drinking water has become fashionable over the past several years. Imagine we have collected a random sample of 30 bottles of "alkaline" drinking water from a number of different stores to represent the population of "alkaline" bottled water for a particular brand available to the general consumer. The labels on each of the bottles claim that the pH of the "alkaline" water is 8.5. A laboratory then proceeds to measure the exact pH of the water in each bottle.

Table 1: pH of water in random sample of "alkaline bottled water"

If you look at the table above, you see that some bottles have a pH measured to be lower than 8.5, while other bottles have a pH measured to be higher. What can the data tell us about the actual pH levels found in this brand of "alkaline" water bottles marketed to the public as having a pH of 8.5? Statistical hypothesis testing provides a sound method to evaluate this question. Which specific test to use, however, depends on the specific question being asked.

Is a t test appropriate to apply to this data?

Let's start by asking: Is a t test an appropriate method to analyze this set of pH data? The following list reviews the requirements and assumptions for using a t test:

  • Independent sampling : In an independent sample t test, the data values are independent. The pH of one bottle of water does not depend on the pH of any other water bottle. (An example of dependent values would be if you collected water bottles from a single production lot. A sample from a single lot is representative only of that lot, not of alkaline bottled water in general).
  • Continuous variable : The data values are pH levels, which are numerical measurements that are continuous.
  • Random sample : We assume the water bottles are a simple random sample from the population of "alkaline" water bottles produced by this brand as they are a mix of many production lots.
  • Normal distribution : We assume the population from which we collected our samples has pH levels that are normally distributed. To verify this, we should visualize the data graphically. The figure below shows a histogram for the pH measurements of the water bottles. From a quick look at the histogram, we see that there are no unusual points, or outliers. The data look roughly bell-shaped, so our assumption of a normal distribution seems reasonable. The QQ plot can also be used to graphically assess normality and is the preferred choice when the sample size is small.

QQplot ph measurements

Based upon these features and assumptions being met, we can conclude that a t test is an appropriate method to be applied to this set of data.

Which t test is appropriate to use?

The next decision is which t test to apply, and this depends on the exact question we would like our analysis to answer. This example illustrates how each type of t test could be chosen for a specific analysis, and why the one sample t test is the correct choice to determine if the measured pH of the bottled water samples match the advertised pH of 8.5.

We could be interested in determining whether a certain characteristic of a water bottle is associated with having a higher or lower pH, such as whether bottles are glass or plastic. For this question, we would effectively be dividing the bottles into 2 separate groups and comparing the means of the pH between the 2 groups. For this analysis, we would elect to use a two sample t test because we are comparing the means of two independent groups.

We could also be interested in learning if pH is affected by a water bottle being opened and exposed to the air for a week. In this case, each original sample would be tested for pH level after a week had elapsed and the water had been exposed to the air, creating a second set of sample data. To evaluate whether this exposure affected pH, we would again be comparing two different groups of data, but this time the data are in paired samples each having an original pH measurement and a second measurement from after the week of exposure to the open air. For this analysis, it is appropriate to use a paired t test so that data for each bottle is assembled in rows, and the change in pH is considered bottle by bottle.

Returning to the original question we set out to answer (whether bottled water that is advertised to have a pH of 8.5 actually meets this claim), it is now clear that neither an independent two sample t test nor a paired t test would be appropriate. In this case, all 30 pH measurements are sampled from one group representing bottled drinking water labeled "alkaline" available to the general consumer. We wish to compare this measured mean with an expected advertised value of 8.5. This is the exact situation for which one should employ a one sample t test!

From a quick look at the descriptive statistics, we see that the mean of the sample measurements is 8.513, slightly above 8.5. Does this average from our sample of 30 bottles validate the advertised claim of pH 8.5? By applying Prism's one sample t test analysis to this data set, we will get results by which we can evaluate whether the null hypothesis (that there is no difference between the mean pH level in the water bottles and the pH level advertised on the bottles) should be rejected.
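Since Table 1 is not reproduced here, the sketch below uses simulated pH readings with a mean near the reported 8.513, simply to show the mechanics of the test; the simulated values are my own assumption, not the article's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical pH readings standing in for the article's Table 1
ph = rng.normal(loc=8.513, scale=0.05, size=30)

# One sample t test against the advertised pH of 8.5
t, p = stats.ttest_1samp(ph, popmean=8.5)

# Reject the null hypothesis at alpha = 0.05 only if p <= 0.05
reject = p <= 0.05
```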

How to Perform a One Sample T Test in Prism

In prior versions of Prism, the one sample t test and the Wilcoxon signed rank test were computed as part of Prism's Column Statistics analysis. Now, starting with Prism 8, performing one sample t tests is even easier with a separate analysis in Prism.

Steps to perform a one sample t test in Prism

  • Create a Column data table.
  • Enter each data set in a single Y column so all values from each group are stacked into a column. Prism will perform a one sample t test (or Wilcoxon signed rank test) on each column you enter.
  • Click Analyze, look in the list of Column analyses, and choose one sample t test and Wilcoxon test.

It's that simple! Prism streamlines your t test analysis so you can make more accurate and more informed data interpretations. Start your 30-day free trial of Prism and try performing your first one sample t test in Prism.

Watch this video for a step-by-step tutorial on how to perform a t test in Prism.


One Sample T-Test

The one sample t -test is a statistical procedure used to determine whether a sample of observations could have been generated by a process with a specific mean. Suppose you are interested in determining whether an assembly line produces laptop computers that weigh five pounds. To test this hypothesis, you could collect a sample of laptop computers from the assembly line, measure their weights, and compare the sample with a value of five using a one-sample t -test.

There are two kinds of hypotheses for a one sample t -test, the null hypothesis and the alternative hypothesis . The alternative hypothesis assumes that some difference exists between the true mean (μ) and the comparison value (m0), whereas the null hypothesis assumes that no difference exists. The purpose of the one sample t -test is to determine if the null hypothesis should be rejected, given the sample data. The alternative hypothesis can assume one of three forms depending on the question being asked. If the goal is to measure any difference, regardless of direction, a two-tailed hypothesis is used. If the direction of the difference between the sample mean and the comparison value matters, either an upper-tailed or lower-tailed hypothesis is used. The null hypothesis remains the same for each type of one sample t -test. The hypotheses are formally defined below:

  • The null hypothesis (\(H_0\)) assumes that the difference between the true mean (\(\mu\)) and the comparison value (\(m_0\)) is equal to zero.
  • The two-tailed alternative hypothesis (\(H_1\)) assumes that the difference between the true mean (\(\mu\)) and the comparison value (\(m_0\)) is not equal to zero.
  • The upper-tailed alternative hypothesis (\(H_1\)) assumes that the true mean (\(\mu\)) of the sample is greater than the comparison value (\(m_0\)).
  • The lower-tailed alternative hypothesis (\(H_1\)) assumes that the true mean (\(\mu\)) of the sample is less than the comparison value (\(m_0\)).

The mathematical representations of the null and alternative hypotheses are defined below:

  • \(H_0:\ \mu\ =\ m_0\)
  • \(H_1:\ \mu\ \ne\ m_0\)    (two-tailed)
  • \(H_1:\ \mu\ >\ m_0\)    (upper-tailed)
  • \(H_1:\ \mu\ <\ m_0\)    (lower-tailed)

Note. It is important to remember that hypotheses are never about data, they are about the processes which produce the data. If you are interested in knowing whether the mean weight of a sample of laptops is equal to five pounds, the real question being asked is whether the process that produced those laptops has a mean of five.


Assumptions

As a parametric procedure (a procedure which estimates unknown parameters), the one sample t -test makes several assumptions. Although t -tests are quite robust, it is good practice to evaluate the degree of deviation from these assumptions in order to assess the quality of the results. The one sample t -test has four main assumptions:

  • The dependent variable must be continuous (interval/ratio).
  • The observations are independent of one another.
  • The dependent variable should be approximately normally distributed.
  • The dependent variable should not contain any outliers.

Level of Measurement

The one sample t -test requires the sample data to be numeric and continuous, as it is based on the normal distribution. Continuous data can take on any value within a range (income, height, weight, etc.). The opposite of continuous data is discrete data, which can take on only a limited set of values (Low, Medium, High, etc.). Occasionally, discrete data can be used to approximate a continuous scale, such as with Likert-type scales.

Independence

Independence of observations is usually not testable, but can be reasonably assumed if the data collection process was random without replacement. In our example, we would want to select laptop computers at random, rather than according to any systematic pattern. This ensures minimal risk of collecting a biased sample that would yield inaccurate results.

Normality

To test the assumption of normality, a variety of methods are available, but the simplest is to inspect the data visually using a histogram or a Q-Q scatterplot. Real-world data are almost never perfectly normal, so this assumption can be considered reasonably met if the shape looks approximately symmetric and bell-shaped. The data in the example figure below are approximately normally distributed.


Outliers

An outlier is a data value which is too extreme to belong in the distribution of interest. Let’s suppose in our example that the assembly machine ran out of a particular component, resulting in a laptop that was assembled at a much lower weight. This is a condition that is outside of our question of interest, and therefore we can remove that observation prior to conducting the analysis. However, just because a value is extreme does not make it an outlier. Let’s suppose that our laptop assembly machine occasionally produces laptops which weigh significantly more or less than five pounds, our target value. In this case, these extreme values are absolutely essential to the question we are asking and should not be removed. Boxplots are useful for visualizing the variability in a sample, as well as locating any outliers. The boxplot on the left shows a sample with no outliers. The boxplot on the right shows a sample with one outlier.

boxplot of a normally distributed variable

The procedure for a one sample t-test can be summed up in four steps. The symbols to be used are defined below:

  • \(Y\ =\ \)Random sample
  • \(y_i\ =\ \)The \(i^{th}\) observation in \(Y\)
  • \(n\ =\ \)The sample size
  • \(m_0\ =\ \)The hypothesized value
  • \(\overline{y}\ =\ \)The sample mean
  • \(\hat{\sigma}\ =\ \)The sample standard deviation
  • \(T\ =\ \)A random variable that follows a t -distribution with (\(n\ -\ 1\)) degrees of freedom
  • \(t\ =\ \)The t -statistic ( t -test statistic) for a one sample t -test
  • \(p\ =\ \)The \(p\)-value (probability value) for the t -statistic.

The four steps are listed below:

  • 1. Calculate the sample mean.
  • \(\overline{y}\ =\ \frac{y_1\ +\ y_2\ +\ \cdots\ +\ y_n}{n}\)
  • 2. Calculate the sample standard deviation.
  • \(\hat{\sigma}\ =\ \sqrt{\frac{(y_1\ -\ \overline{y})^2\ +\ (y_2\ -\ \overline{y})^2\ +\ \cdots\ +\ (y_n\ -\ \overline{y})^2}{n\ -\ 1}}\)
  • 3. Calculate the test statistic.
  • \(t\ =\ \frac{\overline{y}\ -\ m_0}{\hat{\sigma}/\sqrt{n}}\)
  • 4. Calculate the probability of observing the test statistic under the null hypothesis. This value is obtained by comparing t to a t -distribution with (\(n\ -\ 1\)) degrees of freedom. This can be done by looking up the value in a table, such as those found in many statistical textbooks, or with statistical software for more accurate results.
  • \(p\ =\ 2\ \cdot\ Pr(T\ >\ |t|)\)    (two-tailed)
  • \(p\ =\ Pr(T\ >\ t)\)    (upper-tailed)
  • \(p\ =\ Pr(T\ <\ t)\)    (lower-tailed)
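The four steps above translate directly into code. This is a sketch (the function name is mine); the p value step uses SciPy's t distribution in place of a printed table:

```python
from math import sqrt
from scipy import stats

def one_sample_t_test(y, m0, tail="two"):
    """Four-step one sample t test following the text."""
    n = len(y)
    ybar = sum(y) / n                                     # step 1: sample mean
    s = sqrt(sum((v - ybar) ** 2 for v in y) / (n - 1))   # step 2: sample SD
    t = (ybar - m0) / (s / sqrt(n))                       # step 3: test statistic
    if tail == "two":                                     # step 4: p value
        p = 2 * stats.t.sf(abs(t), n - 1)                 # two-tailed
    elif tail == "upper":
        p = stats.t.sf(t, n - 1)                          # upper-tailed
    else:
        p = stats.t.cdf(t, n - 1)                         # lower-tailed
    return t, p

# Hypothetical laptop weights tested against five pounds:
t, p = one_sample_t_test([4.8, 5.1, 5.0, 5.3, 4.9, 5.2], m0=5.0)
```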

Once the assumptions have been verified and the calculations are complete, all that remains is to determine whether the results provide sufficient evidence to reject the null hypothesis in favor of the alternative hypothesis.

Interpretation

There are two types of significance to consider when interpreting the results of a one sample t -test, statistical significance and practical significance.

Statistical Significance

Statistical significance is determined by looking at the p -value. The p -value gives the probability of observing the test results under the null hypothesis. The lower the p -value, the lower the probability of obtaining a result like the one that was observed if the null hypothesis was true. Thus, a low p -value indicates decreased support for the null hypothesis. However, the possibility that the null hypothesis is true and that we simply obtained a very rare result can never be ruled out completely. The cutoff value for determining statistical significance is ultimately decided on by the researcher, but usually a value of .05 or less is chosen. This corresponds to a 5% (or less) chance of obtaining a result like the one that was observed if the null hypothesis was true.

Practical Significance

Practical significance depends on the subject matter. In general, a result is practically significant if the size of the effect is large (or small) enough to be relevant to the research questions being investigated.  It is not uncommon, especially with large sample sizes, to observe a result that is statistically significant but not practically significant.  Returning to the example of laptop weights, an average difference of .002 pounds might be statistically significant.  However, a difference this small is unlikely to be of any interest.  In most cases, both practical and statistical significance are required to draw meaningful conclusions.



One-Sample T-Test – Quick Tutorial & Example

Null Hypothesis, Assumptions, Effect Size, Confidence Intervals for Means, APA Style Reporting

A one-sample t-test evaluates whether a population mean is likely to equal x, some hypothesized value.

One Sample T-Test Diagram

One-Sample T-Test Example

A school director thinks his students perform poorly due to low IQ scores. Most IQ tests have been calibrated to have a mean of 100 points in the general population, so the question is: does the student population have a mean IQ score of 100? Our school has 1,114 students, and IQ tests are somewhat costly to administer. The director therefore draws a simple random sample of N = 38 students and tests them on 4 IQ components:

  • verb (Verbal Intelligence )
  • math (Mathematical Ability )
  • clas (Classification Skills )
  • logi (Logical Reasoning Skills)

The raw data thus collected are in this Googlesheet , partly shown below. Note that a couple of scores are missing due to illness and unknown reasons.

One Sample T-Test Example Data

We'll try to demonstrate that our students have low IQ scores by rejecting the null hypothesis that the mean IQ score for the entire student population is 100 for each of the 4 IQ components measured. Our main challenge is that we only have data on a sample of 38 students from a population of N = 1,114. But let's first just look at some descriptive statistics for each component:

  • N - sample size;
  • M - sample mean and
  • SD - sample standard deviation.

Descriptive Statistics

Descriptive Statistics for One-Sample T-Test

Our first basic conclusion is that our 38 students score lower than 100 points on all 4 IQ components. The differences for verb (99.29) and math (97.97) are small. Those for clas (93.91) and logi (94.74) seem somewhat more serious. Now, our sample of 38 students may obviously come up with slightly different means than our population of N = 1,114. So what can we (not) conclude regarding our population? We'll try to generalize these sample results to our population with 2 different approaches:

  • Statistical significance : how likely are these sample means if the population means are really all 100 points?
  • Confidence intervals : given the sample results, what are likely ranges for the population means?

Both approaches require some assumptions so let's first look into those.

The assumptions required for our one-sample t-tests are

  • independent observations and
  • normality : the IQ scores must be normally distributed in the entire population.

Do our data meet these assumptions? First, our students didn't interact during their tests, so our observations are likely to be independent. Second, normality is only needed for small sample sizes, say N < 25 or so; for the data at hand, normality is no issue. For smaller sample sizes, you could evaluate the normality assumption by

  • inspecting if the histograms roughly follow normal curves,
  • inspecting if both skewness and kurtosis are close to 0 and
  • running a Shapiro-Wilk test or a Kolmogorov-Smirnov test .
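The skewness/kurtosis and Shapiro-Wilk checks are easy to run in software; here is a minimal Python sketch, assuming numpy and scipy are available (the simulated scores below are hypothetical, not the tutorial's data):

```python
import numpy as np
from scipy import stats

# Hypothetical IQ scores for illustration only -- not the tutorial's data.
rng = np.random.default_rng(42)
scores = rng.normal(loc=100, scale=15, size=20)

# Skewness and excess kurtosis should both be close to 0 for normal data.
print("skewness:", stats.skew(scores))
print("excess kurtosis:", stats.kurtosis(scores))

# Shapiro-Wilk test: a small p-value (< 0.05) suggests non-normality.
w, p = stats.shapiro(scores)
print(f"Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")
```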

However, the data at hand meet all assumptions so let's now look into the actual tests.

If we'd draw many samples of students, such samples would come up with different means. We can compute the standard deviation of those means over hypothesized samples: the standard error of the mean, or \(SE_{mean}\):

$$SE_{mean} = \frac{SD}{\sqrt{N}}$$

For our first IQ component, this results in

$$SE_{mean} = \frac{12.45}{\sqrt{38}} = 2.02$$

Our null hypothesis is that the population mean \(\mu_0 = 100\). If this is true, then the average sample mean should also be 100. We now basically compute the z-score for our sample mean: the test statistic \(t\)

$$t = \frac{M - \mu_0}{SE_{mean}}$$

For our first IQ component, this results in

$$t = \frac{99.29 - 100}{2.02} = -0.35$$

If the assumptions are met, \(t\) follows a t-distribution with the degrees of freedom, or \(df\), given by

$$df = N - 1$$

For a sample of 38 respondents, this results in

$$df = 38 - 1 = 37$$

Given \(t\) and \(df\), we can simply look up that the 2-tailed significance level \(p\) = 0.73 in this Googlesheet, partly shown below.

One Sample T-Test In Googlesheets
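These hand calculations can also be reproduced with a short Python sketch, assuming scipy is installed (the summary statistics are those reported above for verbal intelligence):

```python
from math import sqrt
from scipy import stats

M, SD, N = 99.29, 12.45, 38     # verbal intelligence: sample mean, SD, size
mu0 = 100                       # hypothesized population mean

se = SD / sqrt(N)               # standard error of the mean
t = (M - mu0) / se              # test statistic
df = N - 1                      # degrees of freedom
p = 2 * stats.t.sf(abs(t), df)  # 2-tailed significance level

print(f"SE = {se:.2f}, t = {t:.2f}, df = {df}, p = {p:.2f}")
# SE = 2.02, t = -0.35, df = 37, p = 0.73
```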

Interpretation

As a rule of thumb, we reject the null hypothesis if p < 0.05. We just found that p = 0.73 so we don't reject our null hypothesis: given our sample data, the population mean being 100 is a credible statement. So precisely what does p = 0.73 mean? Well, it means there's a 0.73 (or 73%) probability that t < -0.35 or t > 0.35. The figure below illustrates how this probability results from the sampling distribution , t(37).

2-Tailed Significance In T-Distribution

Next, remember that t is just a standardized mean difference. For our data, t = -0.35 corresponds to a difference of -0.71 IQ points. Therefore, p = 0.73 means that there's a 0.73 probability of finding an absolute mean difference of at least 0.71 points. Roughly speaking, the sample mean we found is likely to occur if the null hypothesis is true.

The only effect size measure for a one-sample t-test is Cohen’s D defined as $$Cohen's\;D = \frac{M - \mu_0}{SD}$$ For our first IQ test component, this results in $$Cohen's\;D = \frac{99.29 - 100}{12.45} = -0.06$$ Some general conventions are that

  • | Cohen’s D | = 0.20 indicates a small effect size;
  • | Cohen’s D | = 0.50 indicates a medium effect size;
  • | Cohen’s D | = 0.80 indicates a large effect size.

This means that Cohen’s D = -0.06 indicates a negligible effect size for our first test component. Cohen’s D is absent from SPSS versions prior to SPSS 27 . However, we can easily obtain it from JASP . The JASP output below shows the effect sizes for all 4 IQ test components.

One Sample T-Test Jasp Output

Note that the last 2 IQ components -clas and logi- almost have medium effect sizes. These are also the 2 components whose means differ significantly from 100: p < 0.05 for both means (third table column).
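For a one-sample design, Cohen’s D is a one-liner; a sketch using the descriptives reported above for verbal intelligence:

```python
M, SD = 99.29, 12.45   # verbal intelligence: sample mean and SD
mu0 = 100              # hypothesized population mean

cohens_d = (M - mu0) / SD   # standardized mean difference
print(f"Cohen's D = {cohens_d:.2f}")
# Cohen's D = -0.06
```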

Our data came up with sample means for our 4 IQ test components. Now, we know that sample means typically differ somewhat from their population counterparts. So what are likely ranges for the population means we're after? This is often answered by computing 95% confidence intervals . We'll demonstrate the procedure for our last IQ component, logical reasoning. Since we have 34 observations, t follows a t-distribution with df = 33. We'll first look up which t-values enclose the most likely 95% from the inverse t-distribution. We'll do so by typing =T.INV(0.025,33) into any cell of a Googlesheet , which returns -2.03. Note that 0.025 is 2.5%. This is because the 5% most unlikely values are divided over both tails of the distribution as shown below.

Finding Critical Values for Confidence Intervals from an Inverse T-Distribution in Googlesheets

Now, our critical t-value of ±2.03 implies that 95% of our sample means are expected to fall within ±2.03 standard errors, \(SE_{mean}\), of the population mean. For our last IQ component, $$SE_{mean} = \frac{12.57}{\sqrt{34}} = 2.16$$ We now know that 95% of our sample means are estimated to fluctuate between ± 2.03 · 2.16 = 4.39 IQ test points. Last, we combine this fluctuation with our observed sample mean of 94.74: $$CI_{95\%} = [94.74 - 4.39, 94.74 + 4.39] = [90.35, 99.12]$$ Note that our 95% confidence interval does not enclose our hypothesized population mean of 100. This implies that we'll reject this null hypothesis at α = 0.05. We don't even need to run the actual t-test to draw this conclusion.
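The same confidence interval can be computed in Python (a sketch assuming scipy; `stats.t.ppf` plays the role of the spreadsheet's `T.INV`, and small differences from the hand calculation are due to rounding):

```python
from math import sqrt
from scipy import stats

M, SD, N = 94.74, 12.57, 34   # logical reasoning: sample mean, SD, size

df = N - 1
t_crit = stats.t.ppf(0.975, df)   # ~ 2.03, matches -T.INV(0.025, 33)
se = SD / sqrt(N)                 # ~ 2.16
margin = t_crit * se              # ~ 4.39

ci = (M - margin, M + margin)
print(f"95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```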

A single t-test is usually reported in text as in “The mean for verbal skills did not differ from 100, t(37) = -0.35, p = 0.73, Cohen’s D = -0.06.” For multiple tests, a simple overview table as shown below is recommended. We feel that confidence intervals for means (not mean differences ) should also be included. Since the APA does not mention these, we left them out for now.

APA Style Reporting Table for One-Sample T-Test

Right. Well, I can't think of anything else that is relevant regarding the one-sample t-test. If you do, don't be shy. Just write us a comment below. We're always happy to hear from you!

Thanks for reading!


This tutorial has 3 comments:


By YY Ma on February 23rd, 2021

An excellent introduction! Cohen's D is a useful statistic. I think, if the sample size of each study is identical, | t | can be used as the effect size. And | t(0.05, df) | is the threshold for assessing whether an effect size is significantly large.


By SHAMSUDDEEN IDRIS RIMINGADO on January 9th, 2022

In accordance with your explanation, can a one-sample t test be used to test this hypothesis: there is a significant difference between males and females exposed to error analysis among students with handwriting difficulties?


By Ruben Geert van den Berg on January 10th, 2022

For your question, you'd typically use an independent samples t-test , which is a bit more complicated than the one-sample t-test discussed in this tutorial.

Hope that helps!


One Sample T Test: SPSS, By Hand, Step by Step

  • What is the One Sample T Test?
  • Example (By Hand)

What is a One Sample T Test?

The one sample t test compares the mean of your sample data to a known value. For example, you might want to know how your sample mean compares to the population mean . You should run a one sample t test when you don’t know the population standard deviation or you have a small sample size . For a full rundown on which test to use, see: T-score vs. Z-Score .

Assumptions of the test (your data should meet these requirements for the test to be valid):

  • Data is independent .
  • Data is collected randomly. For example, with simple random sampling .
  • The data is approximately normally distributed .

One Sample T Test Example


Example question : your company wants to improve sales. Past sales data indicate that the average sale was $100 per transaction. After training your sales force, recent sales data (taken from a sample of 25 salesmen) indicates an average sale of $130, with a standard deviation of $15. Did the training work? Test your hypothesis at a 5% alpha level .

Step 1: Write your null hypothesis statement ( How to state a null hypothesis ). The accepted hypothesis is that there is no difference in sales, so: H 0 : μ = $100.

Step 2: Write your alternate hypothesis . This is the one you’re testing in the one sample t test. You think that there is a difference (that the mean sales increased), so: H 1 : μ > $100.

Step 3: Identify the following pieces of information you’ll need to calculate the test statistic. The question should give you these items:

  • The sample mean (x̄). This is given in the question as $130.
  • The population mean (μ). Given as $100 (from past data).
  • The sample standard deviation (s) = $15.
  • Number of observations (n) = 25.

Step 4: Insert the items from Step 3 into the t-score formula: t = (x̄ - μ) / (s/√n) = (130 - 100) / (15/√25) = 30/3 = 10.

Step 5: Find the t-table value. You need two values to find this:

  • The alpha level: given as 5% in the question.
  • The degrees of freedom , which is the number of items in the sample (n) minus 1: 25 – 1 = 24.

Look up 24 degrees of freedom in the left column and 0.05 in the top row. The intersection is 1.711. This is your one-tailed critical t-value.

What this critical value means in a one-tailed t test is that we would expect most values to fall below 1.711 if the null hypothesis is true. If our calculated t-value (from Step 4) falls within this range, we cannot reject the null hypothesis.

Step 6: Compare Step 4 to Step 5. The value from Step 4 (t = 10) does not fall into the range calculated in Step 5, so we can reject the null hypothesis . The value of 10 falls into the rejection region (the right tail).

In other words, it’s highly likely that the mean sale is greater. The one sample t test has told us that sales training was probably a success.

Want to check your work? Take a look at Daniel Soper’s calculator . Just plug in your data to get the t-statistic and critical values.
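You can also check the work with a short Python sketch (assuming scipy):

```python
from math import sqrt
from scipy import stats

x_bar, mu, s, n = 130, 100, 15, 25   # sample mean, null mean, sample SD, size
alpha = 0.05

t_stat = (x_bar - mu) / (s / sqrt(n))    # Step 4: (130-100)/(15/5) = 10
t_crit = stats.t.ppf(1 - alpha, n - 1)   # one-tailed critical value ~ 1.711

print(f"t = {t_stat:.2f}, critical value = {t_crit:.3f}")
if t_stat > t_crit:
    print("Reject the null hypothesis: the training likely worked.")
```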


One Sample t-Test (Jump to: Lecture | Video )

Let's perform a one sample t-test: In the population, the average IQ is 100. A team of scientists wants to test a new medication to see if it has either a positive or negative effect on intelligence, or no effect at all. A sample of 30 participants who have taken the medication has a mean of 140 with a standard deviation of 20. Did the medication affect intelligence? Use alpha = 0.05.

Steps for One-Sample t-Test

1. Define Null and Alternative Hypotheses

2. State Alpha

3. Calculate Degrees of Freedom

4. State Decision Rule

5. Calculate Test Statistic

6. State Results

7. State Conclusion

Let's begin.

1. Define Null and Alternative Hypotheses

Figure 1. Null hypothesis: μ = 100. Alternative hypothesis: μ ≠ 100.

2. State Alpha

Alpha = 0.05

3. Calculate Degrees of Freedom

df = n - 1 = 30 - 1 = 29

4. State Decision Rule

Using an alpha of 0.05 with a two-tailed test with 29 degrees of freedom, we would expect our distribution to look something like this:

Figure 2. The t-distribution with 29 degrees of freedom, with rejection regions in both tails.

Use the t-table to look up a two-tailed test with 29 degrees of freedom and an alpha of 0.05. We find a critical value of 2.0452. Thus, our decision rule for this two-tailed test is:

If t is less than -2.0452, or greater than 2.0452, reject the null hypothesis.

5. Calculate Test Statistic

Figure 3. t = (140 - 100) / (20/√30) = 10.96.

6. State Results

Result: Reject the null hypothesis.

7. State Conclusion

Medication significantly affected intelligence, t = 10.96, p < 0.05.
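The steps above can be sketched in Python (assuming scipy; note that without intermediate rounding the t-value comes out to about 10.95, while the lecture's 10.96 reflects rounding the standard error first):

```python
from math import sqrt
from scipy import stats

x_bar, mu, s, n = 140, 100, 20, 30   # sample mean, null mean, sample SD, size
df = n - 1

t_stat = (x_bar - mu) / (s / sqrt(n))   # test statistic
t_crit = stats.t.ppf(0.975, df)         # two-tailed critical value ~ 2.045
p = 2 * stats.t.sf(abs(t_stat), df)     # two-tailed p-value

print(f"t = {t_stat:.2f}, critical = {t_crit:.4f}, p = {p:.2g}")
if abs(t_stat) > t_crit:
    print("Reject the null hypothesis.")
```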


One-Sample T-Test using SPSS Statistics

Introduction.

The one-sample t-test is used to determine whether a sample comes from a population with a specific mean. This population mean is not always known, but is sometimes hypothesized. For example, you want to show that a new teaching method for pupils struggling to learn English grammar can improve their grammar skills to the national average. Your sample would be pupils who received the new teaching method and your population mean would be the national average score. Alternatively, you believe that doctors who work in Accident and Emergency (A & E) departments work 100 hours per week despite the dangers (e.g., tiredness) of working such long hours. You sample 1000 doctors in A & E departments and see if their hours differ from 100 hours.

This "quick start" guide shows you how to carry out a one-sample t-test using SPSS Statistics, as well as interpret and report the results from this test. However, before we introduce you to this procedure, you need to understand the different assumptions that your data must meet in order for a one-sample t-test to give you a valid result. We discuss these assumptions next.

SPSS Statistics

Assumptions.

When you choose to analyse your data using a one-sample t-test, part of the process involves checking to make sure that the data you want to analyse can actually be analysed using a one-sample t-test. You need to do this because it is only appropriate to use a one-sample t-test if your data "passes" four assumptions that are required for a one-sample t-test to give you a valid result. In practice, checking for these four assumptions just adds a little bit more time to your analysis, requiring you to click a few more buttons in SPSS Statistics when performing your analysis, as well as think a little bit more about your data, but it is not a difficult task.

Before we introduce you to these four assumptions, do not be surprised if, when analysing your own data using SPSS Statistics, one or more of these assumptions is violated (i.e., is not met). This is not uncommon when working with real-world data rather than textbook examples, which often only show you how to carry out a one-sample t-test when everything goes well! However, don’t worry. Even when your data fails certain assumptions, there is often a solution to overcome this. First, let’s take a look at these four assumptions:

  • Assumption #1: Your dependent variable should be measured at the interval or ratio level (i.e., continuous ). Examples of variables that meet this criterion include revision time (measured in hours), intelligence (measured using IQ score), exam performance (measured from 0 to 100), weight (measured in kg), and so forth. You can learn more about interval and ratio variables in our article: Types of Variable .
  • Assumption #2: The data are independent (i.e., not correlated/related ), which means that there is no relationship between the observations. This is more of a study design issue than something you can test for, but it is an important assumption of the one-sample t-test.
  • Assumption #3: There should be no significant outliers . Outliers are data points within your data that do not follow the usual pattern (e.g., in a study of 100 students' IQ scores, where the mean score was 108 with only a small variation between students, one student had a score of 156, which is very unusual, and may even put her in the top 1% of IQ scores globally). The problem with outliers is that they can have a negative effect on the one-sample t-test, reducing the accuracy of your results. Fortunately, when using SPSS Statistics to run a one-sample t-test on your data, you can easily detect possible outliers. In our enhanced one-sample t-test guide, we: (a) show you how to detect outliers using SPSS Statistics; and (b) discuss some of the options you have in order to deal with outliers.
  • Assumption #4: Your dependent variable should be approximately normally distributed . We talk about the one-sample t-test only requiring approximately normal data because it is quite "robust" to violations of normality, meaning that the assumption can be a little violated and still provide valid results. You can test for normality using the Shapiro-Wilk test of normality, which is easily tested for using SPSS Statistics. In addition to showing you how to do this in our enhanced one-sample t-test guide, we also explain what you can do if your data fails this assumption (i.e., if it fails it more than a little bit).

You can check assumptions #3 and #4 using SPSS Statistics. Before doing this, you should make sure that your data meets assumptions #1 and #2, although you don't need SPSS Statistics to do this. When moving on to assumptions #3 and #4, we suggest testing them in this order because it represents an order where, if a violation to the assumption is not correctable, you will no longer be able to use a one-sample t-test. Just remember that if you do not run the statistical tests on these assumptions correctly, the results you get when running a one-sample t-test might not be valid. This is why we dedicate a number of sections of our enhanced one-sample t-test guide to help you get this right. You can find out about our enhanced content on our Features: Overview page.

In the section, Procedure , we illustrate the SPSS Statistics procedure required to perform a one-sample t-test assuming that no assumptions have been violated. First, we set out the example we use to explain the one-sample t-test procedure in SPSS Statistics.

Testimonials

Example and Setup in SPSS Statistics

A researcher is planning a psychological intervention study, but before he proceeds he wants to characterise his participants' depression levels. He tests each participant on a particular depression index, where anyone who achieves a score of 4.0 is deemed to have 'normal' levels of depression. Lower scores indicate less depression and higher scores indicate greater depression. He has recruited 40 participants to take part in the study. Depression scores are recorded in the variable dep_score . He wants to know whether his sample is representative of the normal population (i.e., do they score statistically significantly differently from 4.0).

For a one-sample t-test, there will only be one variable's data to be entered into SPSS Statistics: the dependent variable, dep_score , which is the depression score.

Test Procedure in SPSS Statistics

The 5-step Compare Means > One-Sample T Test... procedure below shows you how to analyse your data using a one-sample t-test in SPSS Statistics when the four assumptions in the previous section, Assumptions , have not been violated. At the end of these five steps, we show you how to interpret the results from this test. If you are looking for help to make sure your data meets assumptions #3 and #4, which are required when using a one-sample t-test, and can be tested using SPSS Statistics, you can learn more in our enhanced guides on our Features: Overview page.

Since some of the options in the Compare Means > One-Sample T Test... procedure changed in SPSS Statistics version 27 , we show how to carry out a one-sample t-test depending on whether you have SPSS Statistics versions 27 or 28 (or the subscription version of SPSS Statistics) or version 26 or an earlier version of SPSS Statistics . The latest versions of SPSS Statistics are version 28 and the subscription version . If you are unsure which version of SPSS Statistics you are using, see our guide: Identifying your version of SPSS Statistics .

SPSS Statistics versions 27 and 28 and the subscription version of SPSS Statistics

Shows the SPSS Statistics menu for the one-sample t-test

Published with written permission from SPSS Statistics, IBM Corporation.

'One-Sample T Test' dialogue box with the dependent variable, 'dep_score', in the box on the left

Note 1: By default, SPSS Statistics uses 95% confidence intervals (labelled as the Confidence Interval Percentage in SPSS Statistics). This equates to declaring statistical significance at the p < .05 level. If you wish to change this you can enter any value from 1 to 99. For example, entering "99" into this box would result in a 99% confidence interval and equate to declaring statistical significance at the p < .01 level. For this example, keep the default 95% confidence intervals.

Note 2: If you are testing more than one dependent variable and you have any missing values in your data, you need to think carefully about whether to select Exclude cases analysis by analysis or Exclude cases listwise in the –Missing Values– area. Selecting the incorrect option could mean that SPSS Statistics removes data from your analysis that you wanted to include. We discuss this further and what options to select in our enhanced one-sample t-test guide.


Now that you have run the Compare Means > One-Sample T Test... procedure to carry out a one-sample t-test, go to the Interpreting Results section. You can ignore the section below, which shows you how to carry out a one-sample t-test if you have SPSS Statistics version 26 or an earlier version of SPSS Statistics.

SPSS Statistics version 26 and earlier versions of SPSS Statistics

Shows the SPSS Statistics menu for the one-sample t-test

Interpreting the SPSS Statistics output of the one-sample t-test

SPSS Statistics generates two main tables of output for the one-sample t-test that contains all the information you require to interpret the results of a one-sample t-test.

If your data passed assumption #3 (i.e., there were no significant outliers) and assumption #4 (i.e., your dependent variable was approximately normally distributed), which we explained earlier in the Assumptions section, you will only need to interpret these two main tables. However, since you should have tested your data for these assumptions, you will also need to interpret the SPSS Statistics output that was produced when you tested for them (i.e., you will have to interpret: (a) the boxplots you used to check if there were any significant outliers; and (b) the output SPSS Statistics produces for your Shapiro-Wilk test of normality). If you do not know how to do this, we show you in our enhanced one-sample t-test guide. Remember that if your data failed any of these assumptions, the output that you get from the one-sample t-test procedure (i.e., the tables we discuss below) will no longer be relevant, and you will need to interpret these tables differently.

However, in this "quick start" guide, we take you through each of the two main tables in turn, assuming that your data met all the relevant assumptions:

Descriptive statistics

You can make an initial interpretation of the data using the One-Sample Statistics table, which presents relevant descriptive statistics:

'One-Sample Statistics' table with columns 'N', 'Mean', 'Std. Deviation' & 'Std. Error Mean' shown for the dependent variable

It is more common than not to present your descriptive statistics using the mean and standard deviation (" Std. Deviation " column) rather than the standard error of the mean (" Std. Error Mean " column), although both are acceptable. You could report the results, using the standard deviation, as follows:

Mean depression score (3.72 ± 0.74) was lower than the population 'normal' depression score of 4.0.

Mean depression score ( M = 3.72, SD = 0.74) was lower than the population 'normal' depression score of 4.0.

However, by running a one-sample t-test, you are really interested in knowing whether the sample you have ( dep_score ) comes from a 'normal' population (which has a mean of 4.0). This is discussed in the next section.

One-sample t-test

The One-Sample Test table reports the result of the one-sample t-test. The top row provides the value of the known or hypothesized population mean you are comparing your sample data to, as highlighted below:

'Test Value' of 4 is highlighted in the 'One-Sample Test' table in SPSS Statistics

In this example, you can see the 'normal' depression score value of "4" that you entered in earlier. You now need to consult the first three columns of the One-Sample Test table, which provides information on whether the sample is from a population with a mean of 4 (i.e., are the means statistically significantly different), as highlighted below:

't', 'df' & 'Sig. (2-tailed)' values for the dependent variable, 'dep_score', are highlighted in the 'One-Sample Test' table

Moving from left-to-right, you are presented with the observed t -value (" t " column), the degrees of freedom (" df "), and the statistical significance ( p -value) (" Sig. (2-tailed) ") of the one-sample t-test. In this example, p < .05 (it is p = .022). Therefore, it can be concluded that the population mean is statistically significantly different from the comparison value of 4.0. If p > .05, the difference between the sample-estimated population mean and the comparison population mean would not be statistically significant.

Note: If you see SPSS Statistics state that the " Sig. (2-tailed) " value is ".000", this actually means that p < .0005. It does not mean that the significance level is actually zero.

SPSS Statistics also reports that t = -2.381 (" t " column) and that there are 39 degrees of freedom (" df " column). You need to know these values in order to report your results, which you could do as follows:

Depression score was statistically significantly lower than the population normal depression score, t (39) = -2.381, p = .022.

The breakdown of the last part (i.e., t (39) = -2.381, p = .022) is as follows:

Part       Meaning
t          Indicates that we are comparing to a t-distribution (t-test).
(39)       Indicates the degrees of freedom, which is N - 1.
-2.381     Indicates the obtained value of the t-statistic (obtained t-value).
p = .022   Indicates the probability of obtaining the observed t-value if the null hypothesis is correct.

Table 4.1: Breakdown of a one-sample t-test statistical statement.

You can also include measures of the difference between the two population means in your written report. This information is included in the columns on the far-right of the One-Sample Test table, as highlighted below:

'Mean Difference' & '95% Confidence Interval of the difference' values highlighted for the dependent variable, 'dep_score'

This section of the table shows that the mean difference in the population means is -0.28 (" Mean Difference " column) and the 95% confidence intervals (95% CI) of the difference are -0.51 to -0.04 (" Lower " to " Upper " columns). For the measures used, it will be sufficient to report the values to 2 decimal places. You could write these results as:

Depression score was statistically significantly lower by 0.28 (95% CI, 0.04 to 0.51) than a normal depression score of 4.0, t (39) = -2.381, p = .022.

Depression score was statistically significantly lower by a mean of 0.28, 95% CI [0.04 to 0.51], than a normal depression score of 4.0, t (39) = -2.381, p = .022.

Standardised effect sizes

After reporting the unstandardised effect size, we might also report a standardised effect size such as Cohen's d (Cohen, 1988) or Hedges' g (Hedges, 1981). In our example, this may be useful for future studies where researchers want to compare the "size" of the effect in their studies to the size of the effect in this study.

There are many different types of standardised effect size, with different types often trying to "capture" the importance of your results in different ways. In SPSS Statistics versions 18 to 26 , SPSS Statistics did not automatically produce a standardised effect size as part of a one-sample t-test analysis. However, it is easy to calculate a standardised effect size such as Cohen's d (Cohen, 1988) using the results from the one-sample t-test analysis. In SPSS Statistics versions 27 and 28 (and the subscription version of SPSS Statistics), two standardised effect sizes are automatically produced: Cohen's d and Hedges' g , as shown in the One-Sample Effect Sizes table below:

'Cohen's d' & 'Hedges' g'. One-Sample Effect Sizes table. One-sample t-test in SPSS
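If your version of SPSS Statistics does not produce these, both can be approximated from the t-value and sample size. A sketch for the depression example, using the common small-sample correction factor J ≈ 1 − 3/(4·df − 1) for Hedges' g (the formulas, not SPSS's exact output, are what is illustrated here):

```python
from math import sqrt

t, n = -2.381, 40   # from the One-Sample Test table above
df = n - 1

# For a one-sample t-test, Cohen's d = (M - mu0)/SD = t / sqrt(n).
cohens_d = t / sqrt(n)

# Hedges' g applies a small-sample correction factor J to Cohen's d.
correction = 1 - 3 / (4 * df - 1)
hedges_g = cohens_d * correction

print(f"Cohen's d = {cohens_d:.2f}, Hedges' g = {hedges_g:.2f}")
```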

Reporting the SPSS Statistics output of the one-sample t-test

You can report the findings, without the tests of assumptions, as follows:

Mean depression score (3.73 ± 0.74) was lower than the normal depression score of 4.0, a statistically significant difference of 0.28 (95% CI, 0.04 to 0.51), t (39) = -2.381, p = .022.

Mean depression score ( M = 3.73, SD = 0.74) was lower than the normal depression score of 4.0, a statistically significant mean difference of 0.28, 95% CI [0.04 to 0.51], t (39) = -2.381, p = .022.

Adding in the information about the statistical test you ran, including the assumptions, you have:

A one-sample t-test was run to determine whether depression score in recruited subjects was different to normal, defined as a depression score of 4.0. Depression scores were normally distributed, as assessed by Shapiro-Wilk's test ( p > .05) and there were no outliers in the data, as assessed by inspection of a boxplot. Mean depression score (3.73 ± 0.74) was lower than the normal depression score of 4.0, a statistically significant difference of 0.28 (95% CI, 0.04 to 0.51), t (39) = -2.381, p = .022.

A one-sample t-test was run to determine whether depression score in recruited subjects was different to normal, defined as a depression score of 4.0. Depression scores were normally distributed, as assessed by Shapiro-Wilk's test ( p > .05) and there were no outliers in the data, as assessed by inspection of a boxplot. Mean depression score ( M = 3.73, SD = 0.74) was lower than the normal depression score of 4.0, a statistically significant mean difference of 0.28, 95% CI [0.04 to 0.51], t (39) = -2.381, p = .022.

Null hypothesis significance testing

You can write the result in respect of your null and alternative hypothesis as:

There was a statistically significant difference between means ( p < .05). Therefore, we can reject the null hypothesis and accept the alternative hypothesis.

Practical vs. statistical significance

Although a statistically significant difference was found between the depression scores in the recruited subjects vs. the normal depression score, it does not necessarily mean that the difference encountered, 0.28 (95% CI, 0.04 to 0.51), is enough to be practically significant. Indeed, the researcher might accept that although the difference is statistically significant (and would report this), the difference is not large enough to be practically significant (i.e., the subjects can be treated as normal).

In our enhanced one-sample t-test guide, we show you how to write up the results from your assumptions tests and one-sample t-test procedure if you need to report this in a dissertation/thesis, assignment or research report. We do this using the Harvard and APA styles. We also explain how to interpret the results from the One-Sample Effect Sizes table, which include the two standardised effect sizes: Cohen's d and Hedges' g . You can learn more about our enhanced content in our Features: Overview section.


Understanding t-Tests: 1-sample, 2-sample, and Paired t-Tests

Topics: Hypothesis Testing , Data Analysis

In statistics, t-tests are a type of hypothesis test that allows you to compare means. They are called t-tests because each t-test boils your sample data down to one number, the t-value. If you understand how t-tests calculate t-values, you’re well on your way to understanding how these tests work.

In this series of posts, I'm focusing on concepts rather than equations to show how t-tests work. However, this post includes two simple equations that I’ll work through using the analogy of a signal-to-noise ratio.

Minitab Statistical Software offers the 1-sample t-test, paired t-test, and the 2-sample t-test. Let's look at how each of these t-tests reduce your sample data down to the t-value.

How 1-Sample t-Tests Calculate t-Values

Understanding this process is crucial to understanding how t-tests work. I'll show you the formula first, and then I’ll explain how it works.

formula to calculate t for a 1-sample t-test

Please notice that the formula is a ratio. A common analogy is that the t-value is the signal-to-noise ratio.

Signal (a.k.a. the effect size)

The numerator is the signal. You simply take the sample mean and subtract the null hypothesis value. If your sample mean is 10 and the null hypothesis is 6, the difference, or signal, is 4.

If there is no difference between the sample mean and null value, the signal in the numerator, as well as the value of the entire ratio, equals zero. For instance, if your sample mean is 6 and the null value is 6, the difference is zero.

As the difference between the sample mean and the null hypothesis mean increases in either the positive or negative direction, the strength of the signal increases.

Noise (a.k.a. the variability in your data)

The denominator is the noise. The equation in the denominator is a measure of variability known as the standard error of the mean . This statistic indicates how accurately your sample estimates the mean of the population. A larger number indicates that your sample estimate is less precise because it has more random error.

This random error is the “noise.” When there is more noise, you expect to see larger differences between the sample mean and the null hypothesis value even when the null hypothesis is true . We include the noise factor in the denominator because we must determine whether the signal is large enough to stand out from it.
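The noise term described above is simple to compute. As a minimal sketch (assuming Python with NumPy; the sample values are hypothetical):

```python
import numpy as np

# Hypothetical sample of measurements (illustrative values only).
sample = np.array([8.0, 9.5, 10.5, 11.0, 12.0, 9.0, 10.0, 10.0])

n = len(sample)
s = sample.std(ddof=1)    # sample standard deviation (n - 1 in the denominator)
sem = s / np.sqrt(n)      # standard error of the mean: the "noise"

print(round(sem, 4))  # → 0.433
```

A larger standard error means a noisier estimate of the population mean, so the same signal yields a smaller t-value.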

Signal-to-Noise ratio

Both the signal and noise values are in the units of your data. If your signal is 6 and the noise is 2, your t-value is 3. This t-value indicates that the difference is 3 times the size of the standard error. However, if there is a difference of the same size but your data have more variability (6), your t-value is only 1. The signal is at the same scale as the noise.

In this manner, t-values allow you to see how distinguishable your signal is from the noise. Relatively large signals and low levels of noise produce larger t-values. If the signal does not stand out from the noise, it’s likely that the observed difference between the sample estimate and the null hypothesis value is due to random error in the sample rather than a true difference at the population level.
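Putting signal and noise together, the t-value can be computed by hand and checked against a library routine (assuming Python with SciPy; the data values are hypothetical):

```python
import numpy as np
from scipy import stats

sample = np.array([10.2, 9.8, 11.1, 10.5, 9.9, 10.8, 10.4, 10.6])  # hypothetical data
null_value = 10.0

signal = sample.mean() - null_value                  # effect size (numerator)
noise = sample.std(ddof=1) / np.sqrt(len(sample))    # standard error (denominator)
t_manual = signal / noise

t_scipy, p = stats.ttest_1samp(sample, null_value)
print(t_manual, t_scipy)  # the two t-values agree
```

The by-hand ratio and `scipy.stats.ttest_1samp` produce the same statistic, which is the point of the signal-to-noise analogy.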

A Paired t-test Is Just A 1-Sample t-Test

Many people are confused about when to use a paired t-test and how it works. I’ll let you in on a little secret. The paired t-test and the 1-sample t-test are actually the same test in disguise! As we saw above, a 1-sample t-test compares one sample mean to a null hypothesis value. A paired t-test simply calculates the difference between paired observations (e.g., before and after) and then performs a 1-sample t-test on the differences.

You can test this with this data set to see how all of the results are identical, including the mean difference, t-value, p-value, and confidence interval of the difference.

Minitab worksheet with paired t-test example

Understanding that the paired t-test simply performs a 1-sample t-test on the paired differences can really help you understand how the paired t-test works and when to use it. You just need to figure out whether it makes sense to calculate the difference between each pair of observations.

For example, let’s assume that “before” and “after” represent test scores, and there was an intervention in between them. If the before and after scores in each row of the example worksheet represent the same subject, it makes sense to calculate the difference between the scores in this fashion—the paired t-test is appropriate. However, if the scores in each row are for different subjects, it doesn’t make sense to calculate the difference. In this case, you’d need to use another test, such as the 2-sample t-test, which I discuss below.

Using the paired t-test simply saves you the step of having to calculate the differences before performing the t-test. You just need to be sure that the paired differences make sense!

When it is appropriate to use a paired t-test, it can be more powerful than a 2-sample t-test. For more information, go to Overview for paired t .
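This equivalence is easy to verify (assuming Python with SciPy; the before/after scores below are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical before/after scores for the same six subjects.
before = np.array([72.0, 68.0, 75.0, 80.0, 66.0, 71.0])
after  = np.array([75.0, 70.0, 74.0, 84.0, 69.0, 74.0])

paired = stats.ttest_rel(after, before)              # paired t-test
one_sample = stats.ttest_1samp(after - before, 0.0)  # 1-sample t-test on the differences

print(paired.statistic, one_sample.statistic)  # identical statistics
print(paired.pvalue, one_sample.pvalue)        # identical p-values
```

Both calls reduce to the same arithmetic: a 1-sample test of whether the mean difference is zero.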

How 2-Sample t-Tests Calculate t-Values

The 2-sample t-test takes your sample data from two groups and boils it down to the t-value. The process is very similar to the 1-sample t-test, and you can still use the analogy of the signal-to-noise ratio. Unlike the paired t-test, the 2-sample t-test requires independent groups for each sample.

The formula is below, followed by some discussion.

t = (mean of group 1 − mean of group 2) / (standard error of the difference between the group means)

For the 2-sample t-test, the numerator is again the signal, which is the difference between the means of the two samples. For example, if the mean of group 1 is 10, and the mean of group 2 is 4, the difference is 6.

The default null hypothesis for a 2-sample t-test is that the two groups are equal. You can see in the equation that when the two groups are equal, the difference (and the entire ratio) also equals zero. As the difference between the two groups grows in either a positive or negative direction, the signal becomes stronger.

In a 2-sample t-test, the denominator is still the noise, but Minitab can use two different values. You can either assume that the variability in both groups is equal or not equal, and Minitab uses the corresponding estimate of the variability. Either way, the principle remains the same: you are comparing your signal to the noise to see how much the signal stands out.

Just like with the 1-sample t-test, for any given difference in the numerator, as you increase the noise value in the denominator, the t-value becomes smaller. To determine that the groups are different, you need a t-value that is large.
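A sketch of both denominator choices (assuming Python with SciPy; the group values are hypothetical):

```python
import numpy as np
from scipy import stats

group1 = np.array([12.1, 11.5, 13.0, 12.4, 11.9, 12.8])  # hypothetical groups
group2 = np.array([10.2, 10.9,  9.8, 10.5, 11.1, 10.0])

pooled = stats.ttest_ind(group1, group2, equal_var=True)   # Student's t (pooled variance)
welch  = stats.ttest_ind(group1, group2, equal_var=False)  # Welch's t (separate variances)

print(pooled.statistic, welch.statistic)
```

`equal_var=True` assumes equal variability in both groups; `equal_var=False` (Welch's test) does not, and is often the safer default when the group variances may differ.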

What Do t-Values Mean?

Each type of t-test uses a procedure to boil all of your sample data down to one value, the t-value. The calculations compare your sample mean(s) to the null hypothesis and incorporate both the sample size and the variability in the data. A t-value of 0 indicates that the sample results exactly equal the null hypothesis. In statistics, we call the difference between the sample estimate and the null hypothesis the effect size. As this difference increases, the absolute value of the t-value increases.

That’s all nice, but what does a t-value of, say, 2 really mean? From the discussion above, we know that a t-value of 2 indicates that the observed difference is twice the size of the variability in your data. However, we use t-tests to evaluate hypotheses rather than just figuring out the signal-to-noise ratio. We want to determine whether the effect size is statistically significant.

To see how we get from t-values to assessing hypotheses and determining statistical significance, read the other post in this series, Understanding t-Tests: t-values and t-distributions .

© 2023 Minitab, LLC. All Rights Reserved.

The t-test is a statistical test procedure that tests whether there is a significant difference between the means of two groups.


The two groups could be, for example, patients who received drug A and patients who received drug B, and you want to know if there is a difference in blood pressure between these two groups.

Types of t-test

There are three different types of t-tests: the one-sample t-test, the independent-samples t-test, and the paired-samples t-test.


One sample t-Test

When do we use the one sample t-test (simple t-test)? We use the one sample t-test when we want to compare the mean of a sample with a known reference mean.


Example of a one sample t-test

A manufacturer of chocolate bars claims that its chocolate bars weigh 50 grams on average. To verify this, a sample of 30 bars is taken and weighed. The mean value of this sample is 48 grams.


We can now perform a one sample t-test to see if the mean of 48 grams is significantly different from the claimed 50 grams.
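As a sketch of this test (assuming Python with SciPy; the individual bar weights are simulated, since only the sample mean of 48 grams is given above):

```python
import numpy as np
from scipy import stats

# Simulated weights of 30 chocolate bars, centred near the observed mean of 48 g.
rng = np.random.default_rng(1)
weights = rng.normal(loc=48.0, scale=2.0, size=30)

# One-sample t-test against the claimed mean of 50 g.
t_stat, p_value = stats.ttest_1samp(weights, popmean=50.0)
print(t_stat, p_value)
```

A small p-value would indicate that the sample mean is significantly different from the claimed 50 grams.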

t-test for independent samples

When do we use the t-test for independent samples? We use the t-test for independent samples when we want to compare the means of two independent groups or samples. We want to know if there is a significant difference between these means.


Example of a t-test for independent samples

We would like to compare the effectiveness of two painkillers, drug A and drug B .


To do this, we randomly divide 60 test subjects into two groups. The first group receives drug A , the second group receives drug B . With an independent t-test we can now test whether there is a significant difference in pain relief between the two drugs.
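A sketch of such a test (assuming Python with SciPy; the pain-relief scores are simulated for illustration):

```python
import numpy as np
from scipy import stats

# Simulated pain-relief scores for two independent groups of 30 subjects each.
rng = np.random.default_rng(7)
drug_a = rng.normal(loc=5.0, scale=1.5, size=30)
drug_b = rng.normal(loc=4.2, scale=1.5, size=30)

result = stats.ttest_ind(drug_a, drug_b)
print(result.statistic, result.pvalue)
```

Note that `ttest_ind` treats the two arrays as independent groups; the subjects in one group must not appear in the other.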

Paired samples t-Test

When do we use the t-test for dependent samples (paired t-test)? The t-test for dependent samples is used to compare the means of two dependent groups.


Example of the t-test for paired samples

We want to know how effective a diet is. To do this, we weigh 30 people before the diet and exactly the same people after the diet.


Now we can see for each person how big the weight difference is between before and after . With a dependent t-test we can now check whether there is a significant difference.
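A sketch of this paired test (assuming Python with SciPy; the weights are simulated for illustration):

```python
import numpy as np
from scipy import stats

# Simulated weights of the same 30 people before and after the diet.
rng = np.random.default_rng(3)
before = rng.normal(loc=80.0, scale=10.0, size=30)
after = before - rng.normal(loc=2.0, scale=1.5, size=30)  # average loss of ~2 kg

result = stats.ttest_rel(before, after)
print(result.statistic, result.pvalue)
```

Because the same people are measured twice, the values are paired row by row, which is exactly what `ttest_rel` expects.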

Dependent vs. independent sample

In a dependent sample (paired sample), the measured values are available in pairs. The pairs are created, for example, by repeated measurements on the same persons. Independent samples (unpaired sample) result from persons and measurements that are independent of each other.


The t-test for dependent samples is very similar to the t-test for one sample. We can also think of the t-test for dependent samples as having a sample that was measured at two different times. As shown in the following image, we then calculate the difference between the paired values and get a value for one sample.

t-test for one sample and t-test for dependent sample

Once we get -5 , once +2 , once -1 and so on. Now we want to check whether the mean of the just calculated differences deviates from a reference value. In this case, zero. And that is exactly what the t-test does for a sample.

Assumptions

What assumptions must be met in order to calculate a t-test in the first place? First, of course, we must have a suitable sample.

  • For the one sample t-test we need a sample and a reference value.
  • In an independent t-test, we need two independent samples.
  • And with the paired t-test, we need a dependent sample.

The variable being tested for a difference in means must be metric. Metric variables are, for example, age, body weight and income. A non-metric variable is, for example, a person's school-leaving qualification (Secondary School, High School, ...).

Furthermore, the metric variable must be normally distributed in all three variants of the t-test.


You can find out how to test whether your data are normally distributed in the tutorial on testing for normal distribution .

For the independent t-test, the variances in the two groups must also be approximately equal. You can check whether the variances are equal with the Levene test .
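Both checks can be sketched as follows (assuming Python with SciPy; the data are simulated for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group1 = rng.normal(loc=10.0, scale=2.0, size=40)  # simulated groups
group2 = rng.normal(loc=11.0, scale=2.0, size=40)

# Shapiro-Wilk test for normality (null hypothesis: the data are normal).
w, p_normal = stats.shapiro(group1)

# Levene test for equal variances (null hypothesis: the variances are equal).
stat, p_equal_var = stats.levene(group1, group2)

print(p_normal, p_equal_var)
```

In both tests, a large p-value means there is no evidence against the assumption; a small p-value (e.g., below 0.05) suggests the assumption is violated.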

So what are the hypotheses for the t-test? Let's start with the one sample t-test.

t-test for one sample

In the one sample t-test, the null hypothesis and the alternative hypothesis are:

  • Null hypothesis: The population mean is equal to the given reference value (so there is no difference).
  • Alternative hypothesis: The population mean is not equal to the given reference value (so there is a difference).

What about the t-test for independent samples? In the independent t-test, the hypotheses are:

  • Null hypothesis: The means in the two groups are equal (so there is no difference between the two groups).
  • Alternative hypothesis: The mean values in the two groups are not equal (i.e. there is a difference between the two groups).

t-test for paired samples

And finally, the t-test for paired samples. In the paired t-test, the hypotheses are:

  • Null hypothesis: The mean of the differences between the pairs is zero.
  • Alternative hypothesis: The mean of the differences between the pairs is non-zero.

Why do we need a t-test?

Let's say we have made a hypothesis:

There is a difference in the duration of studying between men and women in Germany.

Our basic population is therefore all graduates of a degree programme in Germany. Since we cannot, of course, survey all graduates, we draw a sample that is as representative as possible.


With the t-test we now test the null hypothesis that there is no difference in the population.

Even if there is no difference in the population, we will almost certainly still see some difference in study duration in the sample. It would be very unlikely to draw a sample where the difference is exactly zero.


In simple terms, we now want to know at what difference, measured in the sample, we can say that the length of study of men and women is significantly different. And this is exactly what the t-test answers.

Calculate t-test

How do you calculate a t-test? First the t-value is needed:

To calculate the t-value, we need two values. First, we need the difference of the means and second, the standard deviation of the mean. This value is called the standard error.

t = (difference of the means) / (standard error)

In the one-sample t-test, we calculate the difference between the sample mean and the known reference mean. s is the standard deviation of the data collected and n is the number of cases.

t = (x̄ − μ) / (s / √n)

s divided by the square root of n is then the standard deviation of the mean, i.e. the standard error.

standard error = s / √n

In the t-test for independent samples , the difference is simply calculated from the difference of the two sample means.

t = (x̄₁ − x̄₂) / standard error

To calculate the standard error, we need the standard deviation and the number of cases of the first and the second sample.

Depending on whether we can assume equal or unequal variances for our data, there are different formulas for the standard error. More on this in the tutorial on the t-test for independent samples .

With a paired samples t-test , we only need to calculate the difference of the paired values and calculate the mean from this. The standard error is then the same as in the t-test for one sample.

t = (mean of the paired differences) / (s_d / √n), where s_d is the standard deviation of the differences

Interpret t-value

Regardless of which t-test we calculate, the t-value becomes larger the greater the difference between the means. In the same way, the t-value becomes smaller when the difference between the means is smaller.


Also, the t-value becomes smaller if we have a larger dispersion of the data. So the greater the scatter of the data, the less a given mean difference matters!

The t-value and the null hypothesis

We now want to use the t-test to find out whether we reject the null hypothesis or not. To do this, we can use the t-value in two ways. Either we read the so-called critical t-value from a table or we simply calculate the p-value with the help of the t-value.


Let's start with the method involving the critical t-value, which we can read from a table. To do this, we first need the table of critical t-values , which we can find on datatab.net, under "Tutorials" and "t-distribution". Let's start with the two-sided case, i.e. an undirected hypothesis. Below we see the table.

Table t-distribution

First we have to determine which significance level we want to use. Here we choose a significance level of 0.05, i.e. 5%. For a two-sided test, the 5% is split between the two tails, so we look in the column at 1 − 0.05/2, i.e. at 0.975.

Now we need the degrees of freedom. In the one-sample t-test and the dependent-samples t-test, the degrees of freedom are simply the number of cases minus 1. So if we have a sample of 10 people, we have 9 degrees of freedom. In the independent-samples t-test, we add the numbers of people in the two samples and subtract 2, because we have two samples. Note that the degrees of freedom can also be determined in other ways, depending on whether equal or unequal variances are assumed.


So if we have a significance level of 5% and 9 degrees of freedom, we get a critical t-value of 2.262.

On the one hand, we have now calculated a t-value with the t-test, and on the other hand, we have the critical t-value. If the absolute value of the calculated t-value is greater than the critical t-value, we reject the null hypothesis. Suppose we have calculated a t-value of 2.5. This value is greater than 2.262, so the two means are far enough apart that we can reject the null hypothesis.

Alternatively, we can calculate the p-value for the t-value we obtained. For a t-value of 2.5 with 9 degrees of freedom, we get a p-value of 0.034. The p-value is smaller than 0.05, so we also reject the null hypothesis this way.


As a check, if we enter the t-value of 2.262, we get exactly a p-value of 0.05, which is exactly the limit.
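Both the table lookup and the p-value calculation above can be reproduced programmatically (assuming Python with SciPy):

```python
from scipy import stats

df = 9       # degrees of freedom (10 cases minus 1)
alpha = 0.05

# Critical t-value for a two-sided test at the 5% level.
t_crit = stats.t.ppf(1 - alpha / 2, df)
print(round(t_crit, 3))  # → 2.262

# Two-sided p-value for an observed t-value of 2.5.
p = 2 * stats.t.sf(2.5, df)  # sf is the survival function, 1 - cdf
print(round(p, 3))  # → 0.034
```

`t.ppf` inverts the t-distribution (quantile function), and `t.sf` gives the tail probability; doubling the tail probability yields the two-sided p-value.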


Calculate the t-Test with DATAtab

If you want to calculate a t-test with DATAtab , all you have to do is copy your own data into the table, click on "Hypothesis Test" and then select the desired variables.


For example, if you want to check whether gender has an influence on income, simply click on both variables and a t-test for independent samples is automatically calculated. You can then read the p-value at the bottom.


If you are still unsure how to interpret the results, you can simply click on "Interpretation in words":

Directed and undirected hypothesis

The final question that now arises is what is the difference between a one tailed or directed hypothesis and a two tailed or undirected hypothesis. In the undirected case, the alternative hypothesis is that there is a difference between, e.g. men's and women's wages.


In this case, we are not interested in which of the two earns more, we only want to know whether there is a difference or not. With a directed hypothesis, we are also interested in the direction of the difference. The alternative hypothesis is then, for example, men earn more than women or women earn more than men.

If we look at this graphically with the t-distribution, we see that in the two-sided case we have one range on the left and one on the right. We want to reject the null hypothesis if we are in either of them. At a significance level of 5%, both ranges have a probability of 2.5%, so together they have 5%.

With a one-sided t-test, we reject the null hypothesis only if the t-value falls in a single rejection region, on the side that matches the direction we are testing. With a significance level of 5%, the entire 5% then falls within this one range.
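The relationship between one-sided and two-sided p-values can be sketched as follows (assuming Python with SciPy ≥ 1.6 for the `alternative` parameter; the data are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical sample; we test against a reference value of 5.0.
sample = np.array([5.1, 4.9, 5.6, 5.3, 5.0, 5.4])

two_sided = stats.ttest_1samp(sample, 5.0, alternative='two-sided')  # undirected
one_sided = stats.ttest_1samp(sample, 5.0, alternative='greater')    # directed

# For a positive t-value, the one-sided p-value is half the two-sided one:
# the whole 5% sits in a single tail instead of 2.5% in each.
print(two_sided.pvalue, one_sided.pvalue)
```

This is why a directed hypothesis reaches significance more easily, but only when the effect lies in the predicted direction.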



Cite DATAtab: DATAtab Team (2024). DATAtab: Online Statistics Calculator. DATAtab e.U. Graz, Austria. URL https://datatab.net
