Learn more about probability – and so much more – at . My thanks to Brilliant for sponsoring today’s video.
Check out my MATH MERCH line in collaboration with Beautiful Equations
What, exactly, is probability? In this video we will see a few different perspectives on chance: the classical or a priori viewpoint, the frequentist or empirical viewpoint, and finally the Bayesian or subjective viewpoint. We'll even consider the game of chess, which appears to be overwhelmingly a game of skill, but in which the limits of our bounded rationality still introduce probabilistic elements.
0:00 Intro to Probability
0:50 Classical Probability
2:09 Frequentist Probability
5:18 Bayesian Probability
7:14 Is Chess a game of chance?
11:05 Underestimate the role of chance
11:47 Brilliant.org/treforbazett
Classical Probability: This is when we have a finite set of equally likely possibilities, so the probability of an event A is just the number of outcomes in A divided by the total number of possibilities. This works well when we know everything about the situation, like a standard deck of cards.
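As a rough sketch (not from the video itself; the 52-card deck and suit names are my own stand-ins), the classical count-the-outcomes rule might look like this in Python:

```python
# Classical viewpoint: with finitely many equally likely outcomes,
# P(A) = (number of outcomes in A) / (total number of outcomes).
from fractions import Fraction

ranks = [str(n) for n in range(2, 11)] + ["J", "Q", "K", "A"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(r, s) for r in ranks for s in suits]  # 52 equally likely cards

hearts = [card for card in deck if card[1] == "hearts"]
print(Fraction(len(hearts), len(deck)))  # 1/4
```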
Frequentist Probability: This is when we do empirical studies and see how frequently event A occurs, particularly in the limit as we run a large number of trials. This works well when we don't know a priori exactly what is going on, like a deck with an unknown number of cards removed, but the downside is that we need to be able to run a large number of trials.
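A hedged sketch of the frequentist idea (the removal step, seed, and trial count below are my own invented stand-ins for "a deck with an unknown number of cards removed"): estimate P(heart) as the long-run relative frequency over many draws.

```python
import random

random.seed(0)
full_deck = [(rank, suit) for rank in range(2, 15)
             for suit in ("hearts", "diamonds", "clubs", "spades")]

# Someone secretly removes a random handful of cards; we never learn which ones.
unknown_deck = random.sample(full_deck, k=len(full_deck) - random.randint(1, 10))

trials = 100_000
hits = sum(random.choice(unknown_deck)[1] == "hearts" for _ in range(trials))
print(f"estimated P(heart) ~ {hits / trials:.3f}")  # long-run relative frequency
```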
Bayesian Probability: This is when we begin with a prior probability and, as we gain new information, update our worldview (often using Bayes' Theorem) to get a posterior probability. This viewpoint is subjective because you and I may arrive at different probabilities when we have access to different information, even if we started from the same prior.
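A minimal sketch of one Bayesian update, assuming two made-up hypotheses about the deck (H1: a normal deck where 1/4 of the cards are hearts, H2: a hearts-heavy deck where 1/2 are hearts) and the evidence "the drawn card is a heart":

```python
# Bayes' theorem: P(H | heart) = P(heart | H) * P(H) / P(heart)
prior = {"H1": 0.5, "H2": 0.5}        # prior belief in each hypothesis
likelihood = {"H1": 0.25, "H2": 0.5}  # P(heart | hypothesis)

p_heart = sum(likelihood[h] * prior[h] for h in prior)  # total probability of the evidence
posterior = {h: likelihood[h] * prior[h] / p_heart for h in prior}
print(posterior)  # {'H1': 0.333..., 'H2': 0.666...}: belief shifts toward the hearts-heavy deck
```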
COURSE PLAYLISTS:
►DISCRETE MATH:
►LINEAR ALGEBRA:
►CALCULUS I:
► CALCULUS II:
►MULTIVARIABLE CALCULUS (Calc III):
►VECTOR CALCULUS (Calc IV)
►DIFFERENTIAL EQUATIONS:
►LAPLACE TRANSFORM:
►GAME THEORY:
OTHER PLAYLISTS:
► Learning Math Series
►Cool Math Series:
BECOME A MEMBER:
►Join:
MATH BOOKS & MERCH I LOVE:
► My Amazon Affiliate Shop:
SOCIALS:
►Twitter (math based):
►Instagram (photography based):
Glad I came across this video. Where did you get your chess set from, if I may ask?
Good topic! I always gave chess as an example of a game with no luck, but you've changed my mind a little bit. I say a little bit because it is still at the top of my list of games with the least luck involved. In every sport, the more skill/experience you have, the less you have to rely on luck.
How can he play chess without the pawns? Tartakover used to say "I have never beaten a wholly well opponent". Does that mean his chances of winning were in part dependent on the probable state of health of his opponent?
10:41 I disagree. When it's tournament time online, some lower rated accounts develop this accurate, machine-like, calculation ability. Truly fascinating
derp
Do more chess videos!!!!!!!!!
I REPEAT
DO. MORE. CHESS VIDEOS.
6:51 or you could use Bayesian probability and philosophically say that these updates are merely for our subjective, epistemological use, and that the card is, in actuality, either a heart or it isn't
I feel like I just read this chapter in Nate Silver's book "The Signal and the Noise."
Chess is not a game of probabilities. Playing lots of chess games is, though. When we say we have "winning chances", we mean the opponent's position is hard to play for a human and he might mess up.
And this is a large part of what makes chess interesting. If every chess game were the same, no one would play it. I'm often reminded of the Futurama scene where two robots are playing, and on the first move one says to the other "mate in 143 moves", and his opponent gives up: https://www.youtube.com/watch?v=XtgZKwK6C3U.
no it is not
Rolling a die is, strictly speaking, not random. We could in principle predict it if we had perfect information and understanding of the imperfections of the die, the spin we impart to it, the air pressure changes, tremors on the table, etc. But of course we don't, so it's far more practical to treat it as evenly random. I like the explanation that choosing a probability method is like choosing a philosophical approach.
I had considered this notion when someone was explaining to me that StarCraft was a game without randomness (as in, no random damage values or hit chances, the only randomness that could be considered baked into the game is pathing). Somewhat cheekily I responded that your opponent's behavior is functionally random because there are simply far too many possible moves for you to consider, so you have to treat them as random possibilities weighted based on what you expect them to do. I'm glad to see I wasn't totally off track.
This video is clickbait. Just look at the thumbnail. Never listen to a frequentist. Kony 2012
Nice presentation, I think it’s worth mentioning that many people have used Bayes classifiers in chess engines to improve their evaluation function, or even to adapt to the player’s personal style.
Quite a stupid distinction and video. Of course probabilities are either known (a result of a willful process) or estimated. And of course the limited time forces one to always use estimated probabilities when concerning a natural process. So this video is saying "in Bayesian you know the probability"; eh, nope, you assume them! A waste of time sowwy
No, chess is not a game of chance. lol Like everything else in life, the more you work at it, the better you become at it. You will lose hundreds of thousands of times. The variables are way too many to calculate, because it depends on the person, their experience, their understanding, their style of play, etc. Considering most masters can play an entire chess game in their head, and many can play dozens or more in their head at the same time, that should tell you something.
Or in Star Wars terms: Han Solo, frequentist ("I just call it luck"), Obi Wan Kenobi, classicist ("In my experience, there is no such thing as luck") as well as a Bayesian ("Use the Force, Luke.")
Feynman's path integrals
1.) Horizon effect. You have very little idea, for example, whether 1. d4 or 1. e4 is better. At some point you have to make a guess.
2.) Opening preparation. If my opponent plays a strong move I haven't looked at in a while, I might make mistakes, but if he plays a strong move I just reviewed, then he'll essentially be playing against an engine for a while.
'Chess,' said the Dutch grandmaster, Jan Hein Donner, 'is as much a game of chance as blackjack; or tossing cards into a top hat.' There was a pained silence, then a polite babel of disagreement: it was a game of the utmost skill; a conflict between disciplined minds in which victory would inexorably go to the more perceptive, the more analytical player; a duel of the intellect in which luck played no part. Donner shrugged, lit another cigarette and said: 'Believe that if you like.' Bent Larsen smiled the smile of a man who had heard his friend air such iconoclastic arguments in the past but was quite happy to contest them again, when the score of the fifth game of the World Championship match between Karpov and Korchnoi was brought in. Both men pulled out of their inside pockets the wallet sets all grandmasters seem to carry at all times and began to skim through the moves.
It happened that the teleprinter tape had been torn off after Karpov's 54th move as Black […]. They studied the position for a few moments, mated Karpov in four moves and were surprised when another whole sheet of moves was brought from the teleprinter.
When they saw Korchnoi's 55th move – Be4+ – Larsen's eyebrows went up.
'There you are,' Donner said, quietly and without triumph as though some self-evident truth had been revealed, 'pure luck'.
Your explanation of Bayes is terrible, if not wrong altogether.
The point is not to update our belief B (card is hearts) to 100%.
After seeing B (card is hearts), we update our belief in A (some hypothesis about the deck) to A|B – our belief in hypothesis A after seeing B.
If A is "there are no hearts in the deck", seeing B disproves it. If A is "there are only hearts in the deck", B supports it – but B also supports "1/4 of the cards are hearts", etc.
Bayes' theorem tells us exactly how much we should increase our confidence in hypothesis A after seeing evidence B. And the answer is (how strongly hypothesis A predicted B) * (how likely hypothesis A was), divided by (how likely B was in general).
P(A|B) = P(B|A)*P(A) / P(B)
If event B is predicted only by hypothesis A (novel prediction), seeing B is strong evidence for A.
If B is predicted by many different hypotheses, seeing B is weak evidence for A and cannot be used to distinguish between hypotheses.
If there are events B, not predicted by hypothesis A, A is falsifiable (there are no hearts in the deck, but then we draw a heart – A is wrong).
If hypothesis A can explain every outcome we can think of, it can never be proven wrong, but it is also useless – it cannot restrict our expectations for the future.
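(A small numeric sketch of the update just described, using my own toy numbers rather than the commenter's: three hypotheses about the deck and the evidence B = "we drew a heart".)

```python
# P(A|B) = P(B|A) * P(A) / P(B), applied to each hypothesis A about the deck
priors = {"no hearts": 1/3, "quarter hearts": 1/3, "all hearts": 1/3}
p_b_given_a = {"no hearts": 0.0, "quarter hearts": 0.25, "all hearts": 1.0}

p_b = sum(p_b_given_a[a] * priors[a] for a in priors)  # P(B), how likely B was in general
posteriors = {a: p_b_given_a[a] * priors[a] / p_b for a in priors}
print(posteriors)
# "no hearts" is falsified (posterior 0), "all hearts" gains the most support,
# and "quarter hearts" is only weakly supported, matching the points above.
```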
So, a Bayesian approach to chess would be saying the Spanish defense is a good strategy and playing a game (or 100) using it. If you win, you increase your confidence in the Spanish Defense, while acknowledging that maybe you lucked out and your opponent was weaker than you. It is said that all learning is Bayesian in nature.
For the ones still reading – you might enjoy lesswrong.com – the blog community of Eliezer Yudkowsky, the guy who coined the phrase Friendly AI, among other things.
You might enjoy "Harry Potter and the Methods of Rationality" ( hpmor.com ) even more.
You and most of the commenters here don't understand chess. This has nothing to do with skill level. Steinitz, almost 200 years ago, discovered chess theory that holds true to this day. Lasker elaborated on those ideas in 1906 in his book "Struggle" (English edition), then expanded even further in 1913 with "Das Begreifen der Welt" (I think only the German edition exists) and later "Die Philosophie des Unvollendbar" in 1918. And then there is Game Theory, …"the study of mathematical models of strategic interactions among RATIONAL agents" [Wikipedia]. Random dealings of cards are nothing like rational agents. Chess has zero elements of chance.
Cool video! There's an awesome book called The Master Algorithm that talks a lot about this sort of thing. It has a whole chapter dedicated to Bayesian inference and goes over several other algorithms as well, and how they're implemented in modern computers.
I just came here to say that at least on the thumbnail you look like a dark haired version of linus sebastian
5:48 "The point here is that it's subjective"
No, I think you are wrong. The word you are looking for is "relative", not "subjective".
For instance, position and distance are relative to the frame of reference, but they are not subjective. There's no opinion.
Of course chess is a game of chance. Sometimes when I think I blundered a piece it happens to be defended by chance. Sometimes I make a fork by chance. Sometimes I checkmate my opponent by chance. Need I go on?
I was expecting this to dive deeper into how to measure probabilities at the edge of deterministic prediction. Maybe quantify why and how a slight edge in player skill affects the odds of winning.
Frequentists always seem like a strawman to me. Does anyone really subscribe to that view?
I appreciate the positive aspects of the video, but at one point it conflates 'Bayesian' and 'subjective', and it strongly implies that frequentists somehow reject Bayes' theorem.
Your mic is distorting
Nice DeLand deck you have there. It took till almost the end of the video for me to verify that's what it is. I have one still to this day though I don't use it at all.
I guess arguing bad luck is a way to rationalize consistent failure.
I think what is important here is to establish the boundaries of the field we are thinking in. Without such boundaries we can say EVERYTHING is probabilistic. For example, we play a game of chess, but there is a chance I will be feeling better at the moment of the game and thus perform better. Or I could be feeling worse, maybe I have a headache, and thus perform worse. Without established boundaries of the sphere we think in, we will always arrive at mish-mushy, unclear conclusions. The good thing about chess is that it gives you the OPTION to not rely on chance, if you can seize it. You can always hope for the chance of the opponent missing a move or making a blunder, but that is a very bad strategy which will lead you nowhere in the long run. So in my humble opinion the key element here is the POTENTIAL for the game to be fair, without chance involved, but the subjective human factor can introduce chance, willingly or unwillingly.
Whoever wrote this, I assure you, is not a good chess player
Coming back to this, I know what makes this video so dumb and the video maker so stupid: he never defines "game of chance" and he never explains how something could possibly not be a game of chance. If you argue that the variance in your own performance is enough to define something as a game of chance, you've simply created the dumbest, most useless definition you possibly could. You tried to be clever; you failed.
5:09 But… you actually can Dr. Strange
I've stumbled upon this channel by accident, but I have to say it's really a top-tier popular science channel on YouTube. I hope you get the much bigger audience you deserve! Interesting, original topic, good explanation, helpful graphics, well done editing. Nice job!
For the frequentist probability, I really like the way of presenting it. But I am not sure I agree that the shuffling step only shuffles the selected cards. I think the process should be: 1. reshuffle the entire deck, 2. draw the same number of cards as the first time as the subsample, 3. randomly draw a card from the subsample, 4. repeat 1000 times.
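A hedged sketch of the procedure this comment proposes (the subsample size and seed are my own assumed parameters): each trial reshuffles the full deck, redraws a subsample of the same size, then draws one card from it.

```python
import random

random.seed(0)
full_deck = [(rank, suit) for rank in range(2, 15)
             for suit in ("hearts", "diamonds", "clubs", "spades")]
subsample_size = 20  # assumed size of the original selection
trials = 1000

hits = 0
for _ in range(trials):                                     # step 4: repeat 1000 times
    subsample = random.sample(full_deck, k=subsample_size)  # steps 1-2: reshuffle and redraw
    card = random.choice(subsample)                         # step 3: draw one card at random
    hits += card[1] == "hearts"
print(f"estimated P(heart) ~ {hits / trials:.3f}")
```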
What is the difference between probability and statistics? Are frequentist and Bayesian statistics?
Wow, the card examples are very clever. Great stuff!
I was so hoping that you would do a magic trick in the video!
I'm surprised that Bayesian probability is about the different amounts of information in our minds. The prior belief and the updated belief after receiving new information really surprise me. I'd never heard this explanation before. In class, the professors at uni and teachers in high school just taught us the equations.
Thanks!!!
In the abstract, every chess position is winning, losing, or drawn. But in practice, the amount of computation power needed to decide which scenario it is and what the best moves are is beyond prohibitive. In theory you could just click the "best move" button for days and it would be boring, but at the moment, we can't do that, we can just make AI that crushes humans the vast majority of the time (with the occasional super-strong player stumbling across a blind spot once in a blue moon)
So in practice, yeah, you end up branching into families of opening moves (theory) which are unknown whether to be "won" or "lost" or "drawn", and if you have some rough idea which positions you and your opponent understand better… it starts to get rather head-gamey and probabilistic. Or just modeling player behavior as essentially random with a sort of quirky and unknowable "algorithm" makes the outcome sort of random from that point of view too. It actually looks kind of hard to make an AI make "human" mistakes. To some AIs, it seems like they view a bad move as just a bad move, and blundering a piece in one move for poor compensation might be "no worse" than blundering a three-move tactic or something, but the lowest difficulty-level AIs don't even recapture losing trades half the time, which… is just not human at all.
Also, chess is just hard. lulz.
I suspect that more skilled chess players will more accurately predict the odds of success of each move even if they don't explicitly think in terms of probability.
I have been to probability conferences etc. and people say things like "You don't look like a Bayesian". In 2013 the English Appeal Court actually expressed the opinion in the famous Sally Clark case that Bayesian reasoning is false. Here is a link to a paper which explains the background: https://www.gotohaggstrom.com/Fooling%20juries%20with%20statistics.pdf
The judges made the following comments:
”The chances of something happening in the future may be expressed in terms of percentage. Epidemiological evidence may enable doctors to say that on average smokers increase their risk of lung cancer by X%. But you cannot properly say that there is a 25 per cent chance that something has happened: Hotson v East Berkshire Health Authority [1987] AC 750. Either it has or it has not.”
These are actually the frequentist and Bayesian approaches to STATISTICS. Probability has no paradigms; its approach is through measure theory and its foundations are well established, which, by the way, entail the "frequentist approach to statistics" (e.g. the Central Limit Theorem, the Law of Large Numbers, ergodic-type theorems).
It's a good time to recommend Keynes's book on the topic.
I really dislike describing Bayesian probabilities as "beliefs", because beliefs need not be rational, need not be coherent, etc. It is more accurate to call them epistemic probabilities — a measure of how certain you should be that a statement is true, given the information available to you.
The part of the discussion of Bayesian probability where you can see the card but I can't see it yet, then the probability flips to 0 or 1 once I see it, strongly reminds me of the Schrödinger's cat paradox in Quantum Mechanics.
Amazing video! In the Bayesian chapter you could have explored the scenario where you reveal the information "it's a red card" and see how the probability of the card being a heart grows to 50% 🙂