# The Z Test

Credit: Technology Networks.


The z test is a common statistical test used to compare two groups with respect to their mean (average) values, or to compare one group with a hypothesized value, when the variable of interest is continuous (age or weight, for example). The test helps to draw conclusions about populations based on sampled data.

In this article, we will explore the two types of z test, assumptions of the test, interpretation and a worked example.

## What is a z test?

The z test is a commonly used hypothesis test in inferential statistics that allows us to compare two populations using the mean values of samples from those populations, or to compare the mean of one population to a hypothesized value, when what we are interested in comparing is a continuous variable. These two distinct aims of the z test are addressed using the two-sample z test and the one-sample z test, respectively.

It follows that hypotheses for the two tests will differ slightly. For the one-sample z test, the hypotheses are as follows:

• Null hypothesis (H0) is that the population mean is equal to the hypothesized value (µ = µ0)
• Alternative hypothesis (H1) is that the population mean is not equal to the hypothesized value (µ ≠ µ0)

For the two-sample z test:

• Null hypothesis (H0) is that the two population means are equal (µ1 = µ2)
• Alternative hypothesis (H1) is that the two population means are not equal (µ1 ≠ µ2)

## Z test formula

Like many other statistical tests, the z test lets us calculate a test statistic to compare with a critical value, test our hypotheses and find a p-value to assess the strength of evidence. The formula for the test statistic differs depending on which z test you use. The formula for the one-sample z test is as follows:

z = (x̄ − µ0) / (σ / √n)

Where x̄ is the mean in the sample of data, n is the sample size, µ0 is the hypothesized value for the population mean that we want to compare our sample to and σ is the known population standard deviation. The formula works by taking the difference between the sample mean (x̄) and the hypothesized value for the population mean (µ0) and dividing it by the standard error of the mean (which represents the uncertainty in how well the sample mean represents the population mean, given the sample size).
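As a sketch, the one-sample test statistic can be computed in a few lines of Python; the numbers below are hypothetical, chosen only to illustrate the calculation:

```python
import math

def one_sample_z(sample_mean, mu0, sigma, n):
    """z = (x-bar - mu0) / (sigma / sqrt(n)); valid only when the
    population standard deviation sigma is known."""
    standard_error = sigma / math.sqrt(n)
    return (sample_mean - mu0) / standard_error

# Hypothetical example: a sample of 36 adults with mean weight 72 kg,
# compared against a hypothesized population mean of 70 kg, known sigma = 8.
z = one_sample_z(72, 70, 8, 36)
print(round(z, 2))  # 1.5
```

A positive z indicates the sample mean lies above the hypothesized value, in units of standard errors.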

The formula for the two-sample z test is:

z = (x̄1 − x̄2) / √(σ1²/n1 + σ2²/n2)

In the above, the numerator (top line) is the difference between the sample means, and the denominator (bottom line) is the pooled standard error of the mean for the two sampled groups combined.
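The two-sample statistic can be sketched the same way; the group means, standard deviations and sizes below are hypothetical, used only to show the arithmetic:

```python
import math

def two_sample_z(mean1, mean2, sigma1, sigma2, n1, n2):
    """z = (x-bar1 - x-bar2) / sqrt(sigma1^2/n1 + sigma2^2/n2);
    both population standard deviations must be known."""
    pooled_se = math.sqrt(sigma1 ** 2 / n1 + sigma2 ** 2 / n2)
    return (mean1 - mean2) / pooled_se

# Hypothetical example: two groups of 50, known population SD of 12 in each.
z = two_sample_z(105, 100, 12, 12, 50, 50)
print(round(z, 2))  # 2.08
```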

## What are the assumptions of the z test?

Assumptions for the z test are as follows:

• The data in the variables of interest should be continuous. Continuous variables are those that can take any numeric value, such as age in years, weight in kilograms or temperature in degrees Celsius.
• The data should be drawn from a random sample of the population we are trying to make an inference about. This ensures the dataset is representative and the inference we make from the z test is valid and generalizable.
• The z test assumes that the data take a Normal distribution. For small samples, this can be checked by viewing the data on a histogram plot. For large samples, the sampling distribution of the mean will approximate a Normal distribution (by the central limit theorem).
• For the two-sample z test, the groups must be independently sampled, i.e. the analysis compares two distinct, unrelated datasets.
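For the normality assumption, a quick informal check on a small sample is to bin the data and print a crude text histogram; the sample below is simulated, purely for illustration:

```python
import random
from collections import Counter

random.seed(42)
# Hypothetical small sample: 40 simulated weights (kg), drawn from a
# Normal distribution so the histogram should look roughly bell-shaped.
sample = [random.gauss(70, 8) for _ in range(40)]

# Bin into 5 kg intervals and print one row of asterisks per bin.
bins = Counter(5 * int(x // 5) for x in sample)
for edge in sorted(bins):
    print(f"{edge:>3}-{edge + 5:<3} {'*' * bins[edge]}")
```

In practice a plotting library (or a formal normality test) would be used, but the idea is the same: look for a roughly symmetric, bell-shaped spread around the mean.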

As previously mentioned, the z test is relevant in practice whenever the population standard deviation is known.

## Z test vs t test and when to use a z test

The z test and Student’s t test are similar in that they both investigate the means of one or two groups, share similar assumptions and allow comparison of two mean values to draw conclusions about whether the values differ. The z test may be used for small samples (as well as large samples) as long as the data take a Normal distribution, but when samples are larger (n > 30) the results of the z test and t test converge, and the t test is more commonly used in practice. The key difference is that the z test requires the investigator to know the population standard deviation, whereas the t test uses a sample estimate of the standard deviation. Briefly, the standard deviation is a statistical measure that quantifies the spread (or variability) of data around the mean value.

In reality, investigators will rarely know the population standard deviation, and if they do it is unlikely there would be a need to estimate the population mean and conduct a z test. Therefore, the t test is used far more often in practice. Nonetheless, the z test demonstrates some important theory for statistical inference and hypothesis testing and is often the starting point for introductory statistics courses.

Another difference between the z test and t test relevant to the z test’s interpretation is that it uses the Normal distribution to derive the critical value. This is in contrast with the t test, which uses the t-distribution, a probability distribution whose shape changes as the sample size changes.

## Z test interpretation

As the z test uses a fixed Normal distribution to calculate the critical value and p-value, the z test does not use degrees of freedom (the number of independent bits of information in a statistical model or test, which informs the test statistic and p-value). For the z test, the critical value and p-value will be derived according to the significance level and whether it is a one- or two-tailed test. The two-sided p-value (from the two-tailed test) is usually of more interest as it gives both directions of the effect. The critical value and p-value can be used to test whether any difference between the mean of the two groups being compared is likely to be due to chance.
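Both the critical value and the two-sided p-value can be computed directly from the standard Normal distribution; a minimal Python sketch using only the standard library (the function names here are our own):

```python
from statistics import NormalDist

nd = NormalDist()  # standard Normal: mean 0, standard deviation 1

def two_sided_p(z):
    """Two-sided p-value: probability of a |Z| at least this large under H0."""
    return 2 * (1 - nd.cdf(abs(z)))

def critical_value(alpha=0.05):
    """Two-tailed critical value at significance level alpha."""
    return nd.inv_cdf(1 - alpha / 2)

print(round(critical_value(0.05), 2))  # 1.96
print(round(two_sided_p(1.96), 3))     # 0.05
```

Note the two computations are inverses of each other: a z score exactly at the critical value yields a p-value exactly at the significance level.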

## Z test example

For our worked example of a z test by hand, let us imagine a company is conducting a clinical trial investigating the effectiveness of a new drug candidate in reducing systolic blood pressure of patients with hypertension (measured in millimeters of mercury, mmHg).

Figure 1: Test subject having their blood pressure taken. Credit: Technology Networks.

In our example, we have two groups of patients (one who took the new drug, and one who took a placebo), with systolic blood pressure (continuous outcome variable) measured for both groups (Figure 1). The investigators would like to test to see if there is a difference in blood pressure between the two groups, so they conduct a two-sample z test.

They find that the drug group (which contained n1 = 25 patients) had a mean change of x̄1 = 37 mmHg and the placebo group (which contained n2 = 21 patients) had a mean change of x̄2 = 26 mmHg. They also knew the population standard deviation to be σ = 15 for both groups.

• The null hypothesis is that systolic blood pressure is the same in each group (µ1 = µ2)
• The alternative hypothesis is that systolic blood pressure is different between the groups (µ1 ≠ µ2)

The z test statistic (Z) can be calculated using the formula:

z = (x̄1 − x̄2) / √(σ1²/n1 + σ2²/n2) = (37 − 26) / √(15²/25 + 15²/21) = 11 / 4.44 ≈ 2.47

The z test statistic (also called the z score) is therefore 2.47. Next, we identify the critical value under the specific test conditions. Assuming we are interested in a two-tailed test at the 0.05 significance level (a common convention in statistical testing), our critical value is 1.96 (since the z test uses the fixed Normal distribution, this critical value does not change with degrees of freedom, unlike in other tests).

Finally, to find the p-value we can look up the z score in a z table. Using a cumulative z table, we take the negative of the z score (−2.47) and split it into the row heading (−2.4, the units and tenths) and the column heading (0.07, the hundredths), which together make up the z score (2.4 + 0.07 = 2.47). The table entry, 0.0068, is the one-sided tail probability. Note that in practice this would be calculated for us by statistical software. Doubling the tail probability for our two-tailed test gives a two-sided p-value of approximately 0.014, which falls below the 0.05 significance level. We can therefore reject the null hypothesis and conclude that there is a difference in mean change in systolic blood pressure between the drug and placebo groups, and that this difference is unlikely to be due to chance.
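The worked example can be checked with the Python standard library; note that full-precision arithmetic gives z ≈ 2.48, which the hand z table lookup in the text rounds to 2.47:

```python
import math
from statistics import NormalDist

# Trial data from the example: known population SD of 15 in both groups.
x1, n1 = 37, 25   # mean change (mmHg) and size, drug group
x2, n2 = 26, 21   # mean change (mmHg) and size, placebo group
sigma = 15

z = (x1 - x2) / math.sqrt(sigma ** 2 / n1 + sigma ** 2 / n2)
p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))

print(round(z, 2))            # 2.48 (2.47 via the rounded table lookup)
print(round(p_two_sided, 3))  # 0.013
```

At full precision the two-sided p-value is about 0.013, in line with the hand calculation, and well below the 0.05 significance level.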