(Statistically speaking… I’m 95% confident that updating one’s beliefs based on new evidence is a good thing.)
This silly little web app visualizes Bayes’ theorem through a set of sliders and a diagram. The application lets you manipulate the prior probability of a hypothesis and the likelihood of the evidence in both the true and false cases. As you adjust the sliders, the graphic and the computed posterior probability update instantly. The goal is to help my students build intuition for how beliefs are updated when new evidence appears.
The notation in the app follows standard probability conventions. For example, P(E | H) is read as “the probability of the evidence E assuming the hypothesis H is true.” Likewise, P(E | ¬H) represents the probability of the evidence when the hypothesis does not hold; the symbol ¬ simply means “not.” These quantities feed into Bayes’ theorem to compute the updated belief P(H | E).
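Written out, the update rule those quantities feed into is the standard formula (same symbols as above):

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},
\qquad
P(E) = P(E \mid H)\,P(H) + P(E \mid \neg H)\,\bigl(1 - P(H)\bigr)
```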
or…
Clone the repository and open `bayes_interactive.html` in your browser.
You can buy this T-shirt from my Threadless Store!
Imagine you’re a detective, and you want to figure out if an article is fake news. Before you even look closely, you probably already have a guess. Maybe you think 5 out of 100 articles you see online are fake news. That’s your starting point, called a prior belief. It’s like your best guess before you dig into the clues.
Now, you find some clues about the article. These clues are called evidence; they help you decide whether the article might be fake.
Here’s the cool part!
You need to ask yourself: how likely would these clues be if the article were fake, and how likely would they be if it were real?
Now, use the clues to adjust your guess. If the clues make fake news much more likely than real news, you should raise your belief that this article is fake.
If the clues don’t help much, you don’t change your guess very much.
After combining what you already believed (your starting guess) with the new clues, you decide: “How likely is it that this article is fake?”
You walk into the kitchen and see that a sandwich is missing.
You wonder:
“Did the dog steal my sandwich?”
You know how your dog usually behaves. He almost never steals food… only about 1 out of 20 times.
So your prior belief is: about a 5% chance (1 in 20) that the dog stole the sandwich.
You look at the counter and see: paw prints!
Now you ask:
“Does this clue make it more likely the dog stole the sandwich?”
You know: the dog leaves paw prints about 90% of the time when he’s guilty, but paw prints show up only about 10% of the time when he’s innocent.
So paw prints are a much stronger clue if the dog stole it than if he didn’t. Paw prints are common when the dog is guilty, but rare when he’s innocent.
At first, you thought the dog was guilty only 5% of the time.
But now you’ve found a clue (paw prints) that’s way more likely if the dog did it.
So now, your belief changes:
You think the dog is much more likely to be guilty—maybe 30% or even 50% chance!
You’ve used the clue to update your thinking!
(How likely the evidence is if the hypothesis is true) × (How much you believed the hypothesis to start with) ÷ (How likely the evidence is overall)
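The word formula above can be sketched as a small Python function (the function and parameter names are mine, not part of the app):

```python
def bayes_posterior(prior, likelihood_if_true, likelihood_if_false):
    """Update a belief with Bayes' theorem.

    prior               -- P(H): belief before seeing the evidence
    likelihood_if_true  -- P(E | H): chance of the evidence if H is true
    likelihood_if_false -- P(E | not-H): chance of the evidence if H is false
    """
    # P(E): how likely the evidence is overall, across both cases
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    # P(H | E): likelihood x prior / overall evidence
    return likelihood_if_true * prior / evidence

# A clue that is equally likely either way doesn't move the belief at all:
print(bayes_posterior(0.5, 0.8, 0.8))  # 0.5
```

Notice how the last line matches the intuition above: if the evidence is just as likely whether or not the hypothesis holds, the posterior equals the prior.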
That “vertical bar” — the “|” symbol, or the “pipe” — is spoken as the word “given”.
- Hypothesis H: “The dog stole the sandwich.”
- Evidence E: “There are paw prints on the counter.”
This is your starting belief before seeing any evidence.
You think the dog usually doesn’t steal food—maybe 5% of the time.
So:
P(H) = 0.05
This means:
“If the dog did steal the sandwich, how likely is it that we’d see paw prints?”
You know:
The dog leaves paw prints 90% of the time when he’s guilty.
So:
P(E | H) = 0.9
This means:
“What’s the total chance of seeing paw prints, no matter who did it?”
We calculate this by imagining both cases:
a) The dog did it (5% of the time), and in those cases, paw prints appear 90% of the time:
0.9 × 0.05 = 0.045
b) The dog didn’t do it (95% of the time), and in those cases, he still sometimes jumps on the counter (10%):
0.95 × 0.1 = 0.095
Now add both:
P(E) = 0.045 + 0.095 = 0.14
P(H | E) = P(E | H) × P(H) ÷ P(E) = 0.045 ÷ 0.14 ≈ 0.32
So after seeing the paw prints, your belief that the dog stole the sandwich goes from 5% to 32%!
That’s a huge update… but it’s not 100% proof.
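The arithmetic above can be double-checked with a few lines of Python (the variable names are mine):

```python
p_h = 0.05             # prior: dog steals about 1 time in 20
p_e_given_h = 0.9      # paw prints 90% of the time when guilty
p_e_given_not_h = 0.1  # paw prints 10% of the time when innocent

# Total chance of seeing paw prints, guilty case plus innocent case
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' theorem: likelihood x prior / total evidence
posterior = p_e_given_h * p_h / p_e

print(round(p_e, 2))        # 0.14
print(round(posterior, 2))  # 0.32
```

The posterior of roughly 0.32 is exactly the jump from 5% to 32% described above.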
The repository will gradually include short tutorials illustrating classical uses of Bayes’ theorem, such as:
These scenarios will highlight how the sliders correspond to the quantities in Bayes’ formula and how the posterior changes.
Inspired by Grant Sanderson’s (3Blue1Brown) video on Bayes’ theorem. https://youtu.be/HZGCoVF3YvM?si=MNDsbUa6a30rxe6d