Vaccine Efficacy in plain English

Nishant Deshpande
7 min read · Mar 29, 2021

(Also, for those who have the patience: how to be confident in test results.)

AstraZeneca has 74% efficacy.

Pfizer has 95% efficacy.

Moderna has 93% efficacy.

What exactly does this mean? It is actually (I think) completely intuitive, not some math that only the experts can do.

I looked for the plain-English version that my high school kids could understand, and I didn’t find one. I’m not sure if Google is just not good any more or I didn’t search deeply (more than a few minutes :)), because it can’t be that it doesn’t exist. Either way, I thought I’d just write it as I understand it, in plain English.

Simple Efficacy 1

Let’s construct a trial with round numbers, because we don’t actually have to run the real thing, where people drop out and accidents happen and what not.

Enrolled: 20,000
Placebo: 10,000
Actual Vaccine: 10,000

After 3 months,

Placebo people with Covid: 100
Actual vaccine people with Covid: 10

What is the efficacy?

So, assuming everyone in the trial had, on average, an equal chance of getting Covid (without the vaccine), the vaccine saved 100 − 10 = 90 people from getting Covid. I.e., 90 of a potential 100 people were “saved”.

Efficacy = 90 / 100 = 90%.

Done.
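If you like code, the same arithmetic fits in a couple of lines. This is just a Python sketch of the calculation above; `efficacy` is a name I’m making up, not anything official:

```python
def efficacy(placebo_cases, vaccine_cases):
    """Fraction of expected cases the vaccine prevented.

    Assumes both trial arms are the same size, so the placebo arm's
    case count is what we'd expect in the vaccine arm with no vaccine.
    """
    prevented = placebo_cases - vaccine_cases
    return prevented / placebo_cases

print(efficacy(100, 10))  # 0.9, i.e. 90%
```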

Simple Efficacy 2

Let’s take some different numbers (not so round), just to illustrate again.

Enrolled: 20,000
Placebo: 10,000
Actual Vaccine: 10,000

After 3 months,

Placebo people with Covid: 110
Actual vaccine people with Covid: 35

What is the efficacy?

Now, the vaccine prevented (110 − 35 =) 75 Covid cases out of 110. (We’re essentially saying that if the placebo group got 110 cases, we would expect the vaccine group to have the same number without any effect from the vaccine.)

Efficacy = 75 / 110 = 68.18%

Note the difference between 90% and 68.18%: scaled up to a whole population, that gap is a lot of cases.

Efficacy Differences simplified

Let’s take some more examples of efficacy differences and learn to compare efficacies quickly and intuitively.

Let’s assume
Vaccine A has efficacy 90%.
Vaccine B has efficacy 78%.

To keep the mental math simple, let’s say the placebo group had 100 Covid cases. Then vaccine A’s group had 10 Covid cases (90% efficacy), and vaccine B’s group had 22 (78% efficacy).

So really, the efficacy percentages are with respect to no vaccine. If you compare vaccine A against vaccine B directly, I think it would be fair to say vaccine A is more than twice as good at preventing Covid as vaccine B: it lets through 10 cases where vaccine B lets through 22.
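In code, with the hypothetical numbers above:

```python
placebo_cases = 100
# Breakthrough cases in each vaccine group, given the efficacies above
cases_a = round((1 - 0.90) * placebo_cases)  # vaccine A: 10 cases
cases_b = round((1 - 0.78) * placebo_cases)  # vaccine B: 22 cases

print(cases_b / cases_a)  # 2.2: A lets through less than half as many cases as B
```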

Confidence — in somewhat plain English

Ok. So far, so good. We got efficacy with some basic math.

What about confidence in the results?

Notice that at no point in the above calculations was the number of people in the study used.

So, first, to get enough people to catch Covid naturally, a large number of participants is required. Because, even though the headlines might make it feel otherwise, it’s not all that likely that most people will get Covid in a short space of time.

But leaving that aside, let’s run a smaller vaccine study: 2,000 people instead of 20,000.

We might expect the results to scale down by 10 as well.

Enrolled: 2,000
Placebo: 1,000
Actual Vaccine: 1,000

After 3 months,

Placebo people with Covid: 10
Actual vaccine people with Covid: 1

What is the efficacy?

Well, now the vaccine saved 9 people from Covid, out of 10. So… 9/10 = 90%.

So, we have the same efficacy. So why go to all the trouble of 20,000 people?

We know, instinctively, that we were more confident of our results with the 20,000 people trial vs the 2,000 people trial.

Let’s see if we can quantify our instinct.

Confidence — quantifying our instinct

First, let’s see what happens if the results change slightly. Say one extra person got Covid in the vaccine group, and one fewer in the placebo group, in each trial.

For the 20,000 person trial, our numbers would hardly change.

Placebo people with Covid: 99
Actual vaccine people with Covid: 11

Efficacy = (99 − 11) / 99 = 88/99 = 88.89% (vs 90% before, a change of 1.11 percentage points)

For the 2,000 person trial, our numbers are quite different.

Placebo people with Covid: 9
Actual vaccine people with Covid: 2

Efficacy = (9 − 2) / 9 = 7/9 = 77.78% (vs 90% before, a change of 12.22 percentage points)

So, we can see that with a 10x smaller study population, small changes have roughly a 10x bigger impact on the result, and so with the smaller study we are much less confident of our 90% efficacy.
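Here’s that perturbation replayed for both trial sizes, as a small Python sketch using the same efficacy formula as before:

```python
def efficacy(placebo_cases, vaccine_cases):
    return (placebo_cases - vaccine_cases) / placebo_cases

# (placebo cases, vaccine cases) for the 20,000 and 2,000 person trials
for placebo, vaccine in [(100, 10), (10, 1)]:
    before = efficacy(placebo, vaccine)
    # Shift one case each way: one more in the vaccine group, one fewer in placebo
    after = efficacy(placebo - 1, vaccine + 1)
    print(f"{before:.2%} -> {after:.2%} (a change of {before - after:.2%})")

# 90.00% -> 88.89% (a change of 1.11%)
# 90.00% -> 77.78% (a change of 12.22%)
```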

A less effective vaccine

To really quantify how confident we are, we could do something like the above (make a small change and see how much the end numbers move). I think it’s quite nice and simple.

But let’s say our vaccine was not as effective. Let’s take the second study (2,000 enrolled) but change the numbers a lot.

Enrolled: 2,000
Placebo: 1,000
Actual Vaccine: 1,000

After 3 months,

Placebo people with Covid: 10
Actual vaccine people with Covid: 7

Efficacy = (10–7)/10 = 3/10 = 30%.

Now that’s not so great. But still, if that’s all we had, we should go ahead and start vaccinating everyone, right? (There are serious Covid outcomes to consider, i.e. hospitalizations, but we’re not going there in this simple-math blog.)

But your instinct is saying: wait, 10 and 7 are just too small. Their difference isn’t much, and our “small change” will reduce the efficacy drastically. Let’s apply the same change (one more case in the vaccine group, one fewer in the placebo group).

Efficacy = (9–8)/9 = 1/9 = 11%

So? Now the efficacy is very small. But still, it’s positive… so let’s vaccinate?

The null hypothesis

Quite apart from debating whether 11% or 30% or some such number is “large enough”, we need to quantify how confident we are that the vaccine even works. What if the 7 and 10 were just luck? I mean, if you toss a coin 17 times, do you think it’s possible you could get heads 7 times and tails 10 times? Yes, totally possible. (How possible will be interesting… for later.)

So, what we want to know is: if the vaccine were just saline, what are the chances we would still see this difference in the number of Covid cases between the placebo group and the vaccine group?

Or another way to put it: what are the chances that our results (10 and 7 in the two groups) are just chance, not due to the vaccine? The assumption that the vaccine does nothing is called the “null hypothesis”. In high-school jargon: the chance that the vaccine is “just null” :).

And this is where the binomial distribution comes in. It gives us a way to quantify coin tosses and how probable some results are vs others.

Using the binomial distribution

Let’s look at our data.

7 positives out of the 1,000 vaccinated group.
10 positives out of the 1,000 placebo group.

Total: 17 out of 2,000.

So, let’s just combine both groups. If the vaccine is “null”, then both groups are the same. So the chance of any person in the trial getting Covid is 17/2,000 = 0.0085 (0.85%).

If this is the true chance of getting Covid, then what is the probability that if we took 1,000 people, we would get 7 positives (or 10) in that group of 1,000?

If this probability is high, then we can’t rule out that the 7 and 10 we got were just chance. And so we can’t rule out the “null hypothesis”, which translates to: we can’t rule out that the vaccine is completely ineffective.

So what is that chance?

Well, the expected number of Covid cases among 1,000 people, with a per-person chance of 0.85%, is 0.0085 * 1,000 = 8.5. So we can already see that 7 is actually quite close.

Now go to a binomial test calculator online (https://stattrek.com/online-calculator/binomial.aspx) and put in the numbers: 1,000 trials, success probability 0.0085, and 7 successes.

What the numbers tell us is that there is a 38.5% chance of getting 7 or fewer positives (and, equivalently, a 61.5% chance of more).

(Let’s not worry about the chance of getting exactly 7 positives, for reasons we won’t go into right now.)

So, that’s a pretty high chance that the results we saw were just luck.
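If you’d rather not trust a website, the same number falls out of a few lines of Python (pure standard library; `binom_cdf` is my own helper name, not a built-in function):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# 1,000 people, each with a 0.85% chance of testing positive under the null
print(binom_cdf(7, 1000, 0.0085))  # ~0.385, the 38.5% from the calculator
```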

The threshold commonly used to say that results are unlikely to be chance is 5%. This is what you’re seeing in the “p < 0.05” in papers etc.

Now let’s use 20,000 participants in this study, keeping the efficacy at 30%. Let’s just scale up the numbers and see what happens.

70 positives out of the 10,000 vaccinated group.
100 positives out of the 10,000 placebo group.

Total: 170 out of 20,000.

If the vaccine is not effective (the “null hypothesis”), we can mix the groups to get the per-person Covid probability. It will be the same as in the 2,000 person trial, since all our ratios are the same.

170 / 20,000 = 0.0085 (= 17 / 2,000 as before)

You can already see that the expected number of Covid cases per 10,000 people is 0.0085 * 10,000 = 85. So we have 70 and 100 vs an expected 85. Is this a big difference? Now my intuition is not as clear. Let’s use the binomial test calculator again.

Now you can see that the probability of getting 70 or fewer positives in a 10,000 person group is 0.0538 (5.38%). Before, we had 38.5%.

So here we are much more sure (94.62% sure, in some sense) that these results did not happen by pure chance.

And that this vaccine is most probably having some effect.
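The same check works for the bigger trial; Python’s integers cope with the huge binomial coefficients without complaint (again, `binom_cdf` is my own helper, not a built-in):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# 10,000 people, same 0.85% per-person chance under the null
print(binom_cdf(70, 10000, 0.0085))  # ~0.054: only about a 5.4% chance
```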

More things to think about

Hopefully that gave you some intuition about efficacy and confidence and such. There is a lot more to learn here.

I find statistics really fascinating (especially because I don’t know that much…).

So even if you’re not a stats PhD, you will be able to reason intelligently about the data you see in the real world, and demand better analysis than the single-number headlines out there trying to convey a biased message.
