Test Results by Site:
Online Poker Watchdog has carried out 'bad beat' tests on Party Poker hands. We started by obtaining a sample of 1.26 million Party Poker hands at a very good price from Poker Hand Scout* and importing them into Poker Tracker*.
1) We separated out the hands that were all-in pre-flop and looked at each one from the perspective of the 'underdog' ('all-in equity' < 50%).
2) We calculated the difference between the number of hands the underdog was expected to win in an ideal world and the number of hands it actually won in reality - this is called the actual deviation.
If the actual deviation is positive it means that generally the underdog hands improved to win more often than expected - i.e. there were more bad beats than expected.
If the deal at Party Poker was fair the actual deviation should be very small.
3) But how small is very small? To answer this question we calculated the standard deviation of the sample so that we could compare the actual deviation to it.
If the actual deviation is less than 2 times the standard deviation then this is strong evidence of no bias in the sample.
If the actual deviation is greater than 5 times the standard deviation then this is strong evidence of a bias in the sample.
Sometimes a result will fall slightly above 2 standard deviations (and occasionally above 3). This can happen from time to time purely due to variance. It does not necessarily mean that a bias has been found, although it may be grounds to run further tests on a particular poker site.
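The three steps above can be sketched in code. This is a minimal illustration using made-up hand records, not our actual test script: each record pairs the underdog's all-in equity with whether it won. Treating each hand as an independent Bernoulli trial, the variance of the total win count is the sum of p(1 - p) over the hands, and the final ratio is the figure reported in the table below.

```python
import math

# Hypothetical sample for illustration: (underdog_equity, underdog_won).
# A real run would read these from the imported Poker Tracker hand data.
hands = [(0.32, True), (0.18, False), (0.45, False), (0.40, True), (0.28, False)]

expected_wins = sum(p for p, _ in hands)          # wins in an "ideal world"
actual_wins = sum(1 for _, won in hands if won)   # wins in reality
actual_deviation = actual_wins - expected_wins    # positive => more bad beats

# Each hand is an independent Bernoulli trial with variance p * (1 - p),
# so the standard deviation of the total win count is:
std_dev = math.sqrt(sum(p * (1 - p) for p, _ in hands))

ratio = actual_deviation / std_dev
print(round(actual_deviation, 2), round(std_dev, 2), round(ratio, 2))
```

With 1.26 million real hands the individual equities vary hand by hand, but the calculation is identical.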
This bad beat test has been repeated for pre-flop dominated hands ('all-in equity' from 17-32%) and for hands that were all-in on the flop or the turn:
| Data Group | Actual Deviation | Standard Deviation | Actual Deviation / Standard Deviation |
| --- | --- | --- | --- |
| Pre-flop: underdog | -13.8 | 48.3 | -0.28 |
| Pre-flop: dominated | -7.9 | 32.3 | -0.24 |
| Flop: underdog | +90.1 | 41.6 | +2.17 |
| Turn: underdog | +2.0 | 32.1 | +0.06 |
From the table above we can see that the actual deviation is well within two standard deviations in both pre-flop tests and in the turn test.
In the flop all-in test the sample fell just over 2 standard deviations from the expected result in favour of the underdog; 2.17 to be precise. From 12,193 flop all-in hands, the underdog improved to win 90 more times than expected. This result should occur due to variance in approximately 3% of tests (or roughly once in every 33 tests).
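The "approximately 3%" figure can be checked with the normal approximation: it is the two-sided tail probability of a result at least 2.17 standard deviations from the mean. A quick sketch using only the standard library:

```python
import math

z = 2.17  # the flop all-in result: actual deviation / standard deviation

# Two-sided normal tail probability: the chance of a deviation at least
# this large, in either direction, arising purely from variance.
p_two_sided = math.erfc(z / math.sqrt(2))
print(f"{p_two_sided:.3f}")  # about 0.030, i.e. roughly 1 test in every 33
```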
This result is within our margin for error of 2 standard deviations (see the discussion page for an explanation of errors).
Even without the margin for error it could easily be explained by variance and therefore cannot be seen as evidence of rigging - in fact it would be unusual if we never saw a result like this. Hence, we find no support for our alternative hypothesis and conclude that the Party Poker games tested were fair with respect to 'bad beats' in all cases.
For more information on interpreting deviation see this article.