OK, let’s break this down bit by bit. We may be finding common ground.
Anyways, in order to deal 12 cards, first you deal one card, which can be done by simulating a U(52). When you go to deal the second card, the first card has already been dealt, so you have to deal one of the 51 remaining cards, which means you would (usually) want to simulate a U(51). After that you will have 50 cards remaining when you deal the third card, so you will want a U(50), and so on. At some point you will have dealt 9 of the 12 cards you want to deal, so you will want a U(43) in order to deal the tenth. After you have dealt 12 you can stop (assuming there are 2 players and 3 burn cards; you don’t really need burn cards, but whatever). 43 is just an example. In general, if you are going to deal Y cards, you will want to simulate a U(52-X) for every number X in {0, 1, …, Y-1}.
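To make that concrete, here is a minimal sketch of the idea in Python (the function name `deal` is just my own illustration; `random.randrange` uses the Mersenne Twister under the hood):

```python
import random

def deal(num_cards, deck_size=52):
    """Deal num_cards distinct cards by drawing from a shrinking deck.

    For the X-th draw (X = 0, 1, ..., num_cards - 1) we simulate a
    uniform draw over the deck_size - X cards still remaining.
    """
    deck = list(range(deck_size))  # cards labeled 0..51
    dealt = []
    for x in range(num_cards):
        i = random.randrange(deck_size - x)  # one U(52 - X) draw
        dealt.append(deck.pop(i))            # remove that card from the deck
    return dealt
```

Each card costs exactly one call to the RNG, and a dealt card can never come up again because it is physically removed from the list.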
The useless slider would indeed be deceptive, but it would also be completely undetectable. Whether a slider is useless or does what it claims, the stochastic behavior of the system would be identical.
I do not know what you think Replay does that is not fair. Site policies etc. were not really the topic. I was just discussing dealing cards.
Not sure what you mean by the difference between computer poker and online poker. From the perspective of dealing cards they are the same exact problem. There is a difference in the sense of fairness to all players I guess, because if you are playing against a CPU it really doesn’t matter whether it is fair or not.
The number 1 difference between internet and physical poker is that you can’t look at the people you are playing with. This is the most important difference in every possible sense of the word. This is why it feels different, and this is why it changes what the game is like. The algorithm for shuffling is MUCH less significant than this one simple fact.
As to why people always claim online poker is rigged, it has mostly to do with people’s general lack of understanding of randomness. When playing live, they don’t usually get to think about hands in blocks of 20, because each hand takes a while and there is much more going on than the cards. When playing online, 20 hands can take about 15 minutes, and you get a much better look at what happens long term. People are, in general, very bad at recognizing long-term randomness. The human mind loves to see patterns in things, and things that look a lot like patterns will show up in truly random data all the time. This is why formal statistical tests are so important: there is no level of training you can give someone that will allow them to just look at data and accurately decide whether there is a trend or not.
I am no less susceptible to this than anyone else. Here is a pretty ridiculous (and shameful) example from about 2 months ago: I had an alternative test which I was checking for accuracy in predicting diabetes. I had the results of the standard test side by side with the alternative test. When I looked through them I was thinking “well, it obviously isn’t quite as good, but usually it is mostly in the same ballpark. This seems to be an OK alternative.” Then I realized I had made a mistake: I had paired up the test for patient 1 with the data for patient 2, the test for patient 2 with the data for patient 3, etc. It was all completely bogus since I wasn’t even comparing to the right patient! Yet without actual testing my intuition had said that it was accurate. There was no pattern at all, but after years of training and looking at data my brain still insisted on seeing a pattern anyway.
The point I am trying to make is that speaking from a purely practical perspective, there is a level of randomness and fairness that you can probably call “good enough” and from that point on the other problems with the system will be much more important than any problem with the randomization. In physical shuffling and also in randomization using the MT, we are doing way better than this “good enough” level.
Now as for the last question, I have a little trouble understanding it. As stated, I get the feeling you are using terms that you don’t know the meaning of. That’s fine; they are technical terms from a field you don’t specialize in. If I had to ask technical questions about chemistry I would probably do much worse.
I will try to answer what I think you MEAN rather than what you actually SAID:
When you draw random cards, this thread has shown two different alternatives for how to do it:
The first is that you first draw a card from a set of 52. Then you draw a second card from the 51 remaining. Next you draw a third card from the 50 remaining, and so on.
The second alternative is that you always draw from a set of 52. The issue with this one is that if your first draw was the two of hearts, your second draw might also be the two of hearts. You can’t, of course, deal the same card twice, so the solution is that you have to draw again until you get a card that hasn’t already been dealt.
In my earlier post I wrote a two-line proof that these two methods have the exact same behavior. The probability of being dealt a certain card at a certain turn is the same with either algorithm, so it doesn’t matter. That said, the second algorithm is much worse than the first in efficiency, because with the first one you only have to call the RNG once per card, while with the second one you might have to call the RNG many more times (there is no bound: you might just keep drawing the two of hearts over and over again forever, although the probability of this is vanishingly small).
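For completeness, here is a sketch of the second (redraw-on-collision) method; the function name `deal_with_redraws` is my own, and returning the RNG call count is just to make the efficiency point visible:

```python
import random

def deal_with_redraws(num_cards, deck_size=52):
    """Always draw U(52); redraw whenever the card was already dealt.

    Returns the dealt cards and the number of RNG calls made.
    """
    dealt = []
    rng_calls = 0
    while len(dealt) < num_cards:
        card = random.randrange(deck_size)  # uniform over all 52 cards
        rng_calls += 1
        if card not in dealt:  # collision: already dealt, so draw again
            dealt.append(card)
    return dealt, rng_calls
```

The distribution of the dealt hand is identical to the shrinking-deck method, but `rng_calls` is a random quantity that is at least `num_cards` and has no upper bound.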