Topics in Probability 
Pascal's Wager 
Bernoulli Trials 
Introduction
Background
The Pensées
The Wager "Let us examine this point and declare: 'Either God exists, or He does not.' To which view shall we incline? Reason cannot decide for us one way or the other: we are separated by an infinite gulf. At the extremity of this infinite distance a game is in progress, where either heads or tails may turn up. What will you wager? According to reason you cannot bet either way; according to reason you can defend neither proposition …. 'Both are wrong. The right thing is not to wager at all.' Yes, but a bet must be laid. There is no option: you have joined the game."
Of course, Pascal's Wager is actually a mathematical argument that it is wiser to believe in God. Mathematically speaking, Pascal's Wager goes as follows:
If you choose to believe in God and there is a God, you gain an eternity of happiness: infinitely many "felz" (our unit of happiness). Even if there is no God, believing costs you only a little. The mathematical expectation of believing in God is therefore infinite.
On the other hand, if there is no God and you do not believe in God, you may gain a little, but not more than, say, 1 felz of happiness. However, if there is a God and you do not choose to believe in Him, then you will gain nothing from your unbelief. The mathematical expectation of not believing in God is thus less than:

    1 · P(no God) + 0 · P(God) ≤ 1 felz

Since an infinite expectation is greater than any finite one, and since, other things being equal, a wise person will act in such a way as to maximize the expectation of their happiness, it follows that, other things being equal, a wise person will choose to believe in God.
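The wager's comparison of expectations can be sketched in a few lines of Python. This is an illustrative sketch only: the payoff numbers, the probability p = 0.5, and the use of float('inf') as a stand-in for infinite happiness are assumptions made here, not Pascal's own figures.

```python
INF = float('inf')  # stand-in for an infinite reward of happiness

def expectation(payoff_if_god, payoff_if_no_god, p):
    """Expected happiness (in felz) given P(God exists) = p."""
    return p * payoff_if_god + (1 - p) * payoff_if_no_god

p = 0.5  # Pascal frames the question like a coin toss

# Believe: infinite reward if God exists, a small cost otherwise.
e_believe = expectation(INF, -1, p)

# Not believe: nothing if God exists, at most 1 felz otherwise.
e_not_believe = expectation(0, 1, p)

print(e_believe > e_not_believe)  # True
```

Because any positive probability multiplied by an infinite payoff is infinite, the comparison comes out the same way for every p > 0, which is the point of the argument.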
Conclusion

Contributed by Chuck Hammond 
Boy? Girl? Heads? Tails? Win? Lose? Do any of these sound familiar? When there is the possibility of only two outcomes occurring during any single event, it is called a Bernoulli Trial. Jakob Bernoulli, a prominent mathematician of the late 1600s from a family of mathematicians, spent 20 years of his life studying probability. During this study, he arrived at an equation that calculates probability in a Bernoulli Trial. His proofs are published in his 1713 book Ars Conjectandi (Art of Conjecturing). To be considered a Bernoulli trial, an experiment must meet each of three criteria:
1. Each trial has exactly two possible outcomes, called success and failure.
2. The probability of success is the same on every trial.
3. The trials are independent of one another.
P = probability; m = number of successes; n = number of failures; p = probability of success (S); 1 − p = probability of failure (F)
In order to determine the probability of an event occurring, one must know either the number of trials that will be completed or the number of successes and failures, which together give the total number of trials. For this example, we will use 4 trials. Say we want to know the probability of a coin landing heads-up 3 times and tails-up only once, in no particular order, with heads as success and tails as failure. The equation is as follows:

    P = (m + n)! / (m! · n!) · p^m · (1 − p)^n
      = 4! / (3! · 1!) · (0.5)^3 · (0.5)^1
      = 4 · 0.0625 = 0.25

Therefore, the chance of flipping 3 heads and 1 tail is .25, or 25%.
Because the probability of certain events is not always known, Bernoulli arrived at a way of estimating the probability of S when p is not known. By using his Law of Large Numbers, which states, "The more observations one made of a given situation, the better one would be able to predict future occurrences" (Katz, 1998), Bernoulli devised a formula for an estimate of p: X/N, where X is the number of successes and N is a large number of trials. By this theory, the greater the number of trials performed, the closer one comes to the true probability of an event occurring. But, due to the nature of chance, this is not guaranteed in any single run. Notice that as the ratio of successes (or failures) to the number of trials comes closer to its own probability, the probability of that number of successes and failures increases. An example is as follows: you have a balanced coin, and will flip it 4 times. The desired outcome is 2 heads (successes) and 2 tails (failures):

    P = 4! / (2! · 2!) · (0.5)^2 · (0.5)^2 = 6 · 0.0625 = 0.375

This probability, 0.375, is higher than the 0.25 found for 3 heads and 1 tail: outcomes whose ratio of successes is closer to the fixed probability p are more likely, though never certain.
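The coin-flip calculations, together with Bernoulli's X/N estimate, can be sketched in a few lines of Python. This is an illustrative sketch, not Bernoulli's own notation: the function name bernoulli_probability, the simulation size N, and the random seed are choices made here.

```python
import random
from math import comb  # binomial coefficient (m+n choose m), Python 3.8+

def bernoulli_probability(m, n, p):
    """Probability of exactly m successes and n failures in m + n
    independent Bernoulli trials with success probability p:
    (m+n)! / (m! * n!) * p**m * (1-p)**n."""
    return comb(m + n, m) * p**m * (1 - p)**n

# 3 heads (successes) and 1 tail (failure) with a fair coin:
print(bernoulli_probability(3, 1, 0.5))  # 0.25

# The balanced outcome, 2 heads and 2 tails:
print(bernoulli_probability(2, 2, 0.5))  # 0.375

# Estimating an unknown p by X/N, in the spirit of the Law of
# Large Numbers (seed and N are arbitrary choices for this demo):
random.seed(0)
N = 100_000
X = sum(random.random() < 0.5 for _ in range(N))
print(X / N)  # close to the true p = 0.5, but rarely exact
```

Rerunning the last part with larger N tends to bring X/N closer to p, which is the behavior the Law of Large Numbers describes; any single run can still stray, as the text notes.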

Contributed by Lindsay Eastridge 
References:
