Parrondo's Paradox (wikipedia.org)
69 points by dedalus on April 28, 2023 | hide | past | favorite | 48 comments


If you take two mutually independent events and somehow introduce a dependence between the two, then the events are no longer mutually independent and therefore by law of conditional probability, you can influence the probability of the outcome of the combination.

Corollary: If you take two mutually dependent events and somehow break the dependence between the two, then the events are no longer dependent and therefore by law of conditional probability and law of independent events, you can influence the probability of the outcome of the combination.

A practical example of the corollary would be password re-entry user interfaces during account registration.

The reason we are not shown what we typed earlier when re-entering the password is to make the re-entry independent of the previous entry. Otherwise, we might look at what was typed earlier and subconsciously type the same thing -- which would be bad while we are doing account registration.


I think you are missing a "not" in your first paragraph. "You cannot"


You can (by introducing the dependency between previously independent events).


This seems like such a pointless semantic flex to me...

In this case has the game not become Game A + Game B ?

It's just a larger game with a distinct winning strategy because the ruleset is expanded right?

What's the significance?


I think the HN relevant use case would be looking at this from the opposite side. You've designed 2 games (or algorithms) and both result in a winning state. But when they are used alternately, they lead to worse outcomes. A made up, possibly bad example that's using similar 'rules':

Start with: 1 large fixed-size data structure and 1 cache.

Algorithm A: Checks as it is iterating whether the cache is full. If not, it generates the cache data (SLOW) but can then iterate over the entire data structure quickly. The first time it does this is a loss, but the 2nd pass through the data structure leads to an overall win.

Algorithm B: Prefers an empty cache. If the cache is full it will delete it. It can iterate over the entire data structure fairly quickly. Each time is considered a win.

Now you have a program with many different features and everyone knows that it doesn't really matter if you use Algorithm A or B because they are both programmed to work safely together and if you test a feature using one of the algorithms it will be fast either way, so it's left up to personal preference. The fun begins when the full program starts alternating from algorithm A to B.
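A minimal sketch of that scenario; the cost numbers (BUILD_COST etc.) are made-up assumptions, and "winning" here just means a lower total cost:

```python
BUILD_COST = 10  # assumed one-off cost of generating the cache
FAST_PASS = 1    # assumed cost of a pass with a warm cache
PLAIN_PASS = 2   # assumed cost of algorithm B's cache-free pass

def algo_a(cache_full):
    # Algorithm A: builds the cache if missing (slow), then iterates fast.
    cost = FAST_PASS if cache_full else BUILD_COST + FAST_PASS
    return cost, True

def algo_b(cache_full):
    # Algorithm B: deletes any existing cache, iterates fairly quickly.
    return PLAIN_PASS, False

def total_cost(sequence):
    cache_full, cost = False, 0
    for algo in sequence:
        step, cache_full = algo(cache_full)
        cost += step
    return cost

only_a = total_cost([algo_a] * 10)        # 20: one slow build, then fast passes
only_b = total_cost([algo_b] * 10)        # 20: steady moderate passes
mixed  = total_cost([algo_a, algo_b] * 5) # 65: cache rebuilt and deleted every cycle
```

Each algorithm alone costs 20 for ten passes; alternating costs 65, even though every individual call tested fine in isolation.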


It's a mathematician discovering why we have integration tests.


Or, from a much more charitable angle:

> It's a physicist¹ proving that we need integration tests.

    ¹ Juan Parrondo is a physicist by training, not a mathematician.


Yeah, this seems pointless. Here's an example of the paradox:

1) Hitting both nails and screws with a hammer is a losing game.

2) Screwing both nails and screws with a screwdriver is a losing game.

Paradox alert! If you hammer the nails and screw the screws you've transformed two losing games into a winning game!


Or:

1) Wearing shorts and sandals all year round is a losing game (in this part of the world); you will freeze to death in the winter.

2) Wearing your warmest clothing all year round is a losing game; you will overheat in the summer.

This is a paradox, because the population of Canada should be zero. But wait! What if you just dress appropriately for the season?


And now imagine you didn't know you could switch the tools.


I think a closer-to-real example is the problem of playing a collection of blackjack tables. All the games are the same, but sometimes the state of the decks (ie, which cards have been discarded) will lead to better odds of winning. If you know the state of the decks at each table, you can always choose to play the table with the best odds of winning. This type of strategy has been used to win piles of money in Vegas, FWIW, though it leads to ejection if you're caught.

This is similar to the 'ratchet' examples in the wikipedia page - you play the game with the best odds, and use one game to 'cool off' until you're in the right state to win the second game again. The games in the wikipedia article are kinda unsatisfying, though - there's too much dependence on player state.


Market timing in the stock market (if you have a little insider info), and card counting in lotteries, are historical examples.


Sure but I want to imagine that some people are only playing Game A and others are only playing Game B, unaware of the relation between them that creates positive outcomes by sometimes losing in one game or the other.


Indeed, there's no way this is the product of actual game theory because it is using a pathologically incompatible definition of "game" where the player is allowed to change the rules.


Probability - and how you need to reason about independent vs dependent events - is poorly understood generally.

Let's try to make it HN-relevant by asking:

Can two things that both A/B-tested positively in independent one-at-a-time testing backfire if you launch both of them?


Later on it's revealed. The whole thing is indeed pointlessly daft:

> In summary, Parrondo's paradox is an example of how dependence can wreak havoc with probabilistic computations made under a naive assumption of independence.

In other words, this is a load of time-wasting BS, based on doing a stupid thing at the outset anyone thinking clearly sees right away.


Only if you realize that the game could be looked at as Game A + Game B


Agree completely


I don't understand why it is obvious that for any games A and B, composition C has any relation to A or B.


Agree. Perhaps it is my unfamiliarity with this field but it does not appear that composition is a meaningful operation here.


I often wondered whether some old-school poker players are actually unintentionally doing a variant of this. If you analyze the play of someone like Daniel Negreanu from a game-theoretic perspective, it's clear that lots of the things he does are just bad (blind limps, checking in the dark on the flop, etc.)[1]. But taken together over more than one hand, they create situations which are positive EV by widening the ranges he might have in a particular spot, and therefore increasing the advantage of information asymmetry (he knows exactly what he has, whereas the opponent only knows a now very wide range he could have).

[1] By which I mean these are strategies that are strictly dominated in the game-theory sense.


I think what you say might be true. I once found a dumbed down version which goes like this: (A simplified version of Mia [0])

On your turn, you secretly roll a die and then make a statement about what number you rolled. The next player may then call your bluff. If you lied, you lose, otherwise they lose. Or they may roll again, but must claim to have rolled something higher than whatever you rolled. If you don't lose, you win.

Suppose you have to explain your (mixed, i.e. involving randomness) strategy to other players before playing. This is a simulation of the realistic situation of playing many times and other players learning your strategy.

Let us examine the situation where you are given a 4 from the previous player and you decided to roll. (You would have to prove that this can be a correct move, but what follows is in principle true in any situation where you roll.)

If you roll a 5 or 6, you say so. But if you roll a lower number, you have to claim a 5 or 6 regardless. If you announce a 6, you lose instantly, because the next player has no chance of rolling higher, so they must call your bluff. Thus in a single game the correct move is to announce a 5.

But since the other players know your strategy, you can increase your chances of them believing you have a 5 if you sometimes claim a 6 and thereby forfeit the game. It is an easy calculation to see that this strategy yields more wins than always claiming a 5 in this situation.

I think this is analogous to the poker example: play a clearly bad move and thus gain an advantage by being unpredictable. In my example it is not so hard to see that the advantage can in principle outweigh the cost of playing bad moves.

[0] https://en.wikipedia.org/wiki/Mia_(game)


I also thought of poker, but I'm not sure exactly if there's a connection here:

https://en.wikipedia.org/wiki/Morton%27s_theorem


I'm somewhat skeptical any time anyone references "the fundamental theorem of poker" (like that wiki page), given that Sklansky definitely doesn't understand game theory and the "fundamental theorem" is just wrong. This Morton's theorem shows one example of that.

For people who don't know: the so-called fundamental theorem of poker[1] was proposed by David Sklansky, who was definitely more mathematically minded than most poker players of his generation but was not really a mathematician. It has gained a lot of mindshare even though it's clearly at odds with game theory, probably because it's very simple to state and seems intuitive. It is usually stated as something like "Every time you play a hand differently than you would have if you could see all your opponent's cards, you lose and they gain, and vice versa; and conversely, every time they play a hand differently from how they would if they could see all your cards, you gain and they lose, and vice versa."

This is one of those things that sounds sort of obviously correct and so is very seductive as an idea, but it completely fails if you just think for a second about the fact that you can't see their cards and they can't see your cards. In that world (actual poker), we know that the optimal strategy is a Nash equilibrium in mixed strategies (this is true for all games of hidden information). By definition that means you are sometimes doing something different from "what you would do if you could see their cards", because you have a mixed strategy (i.e. you don't do exactly the same thing every time but choose options with some randomness). The same is true for the opponent. It therefore follows trivially that the "fundamental theorem" can't be correct.

[1] https://en.wikipedia.org/wiki/Fundamental_theorem_of_poker


I mean isn’t that the whole point of poker? If you play strictly rationally, you become predictable and easy to beat, which is why machines are bad at it.


Machines are very good at poker, and there is no way to exploit rational play if executed perfectly.


Started reading the examples and my eyes glazed over. Someone have a better example?


Of the examples given, game A was always a losing game, and game B was basically two games (I'm calling these sub-games) duct taped together, where one is a winning game, and the other is a game that loses very badly. This causes negative expected value across all of game B.

The important parts here are that in game A you lose slower than in game B's losing sub-game, and game B's sub-games switch depending on the resources you win/lose in game A.

The strategy is to play game A until you hit the conditions for game B's winning sub-game to kick in, then play game B until it swaps back over again, then go back to A and repeat the process.

Thus two games, each with negative expected value, can be daisy-chained to produce positive expected value.
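This ratchet is easy to check numerically with the canonical games from the article (Game A: win with probability 1/2 - ε; Game B: win with probability 1/10 - ε when your capital is a multiple of 3, otherwise 3/4 - ε). A rough Monte Carlo sketch:

```python
import random

EPS = 0.005  # the small house edge used in the article's examples

def game_a(c, rng):
    # Game A: win $1 with probability 1/2 - eps; losing on its own
    return c + (1 if rng.random() < 0.5 - EPS else -1)

def game_b(c, rng):
    # Game B: a bad coin when capital is a multiple of 3, a good coin
    # otherwise; losing overall on its own
    p = (0.1 - EPS) if c % 3 == 0 else (0.75 - EPS)
    return c + (1 if rng.random() < p else -1)

def average_capital(pick, rounds=1000, trials=1000, seed=42):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        c = 0
        for t in range(rounds):
            c = pick(t)(c, rng)
        total += c
    return total / trials

only_a = average_capital(lambda t: game_a)
only_b = average_capital(lambda t: game_b)
aabb = average_capital(lambda t: game_a if t % 4 < 2 else game_b)  # AABBAABB...
print(only_a, only_b, aabb)  # both pure games drift down; the mix drifts up
```

Playing A always or B always loses money on average, while the periodic AABB schedule (one of the winning sequences from the literature) comes out ahead.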


Sometimes Wikipedia reads like it is written by that math teacher who “just gets math” and teaches in a way that the only people who will understand … are people who already do.

Granted maybe this article is good and just beyond me, but it is disappointing how much Wikipedia is like that.


It's a reference encyclopedia, not a text book, for better or for worse.


In my experience, on almost every topic it is a better textbook than any textbook, simply because Wikipedia has the right scope and the right amount of content. Wikipedia can, and almost always does, discuss anything in math at three different levels, and cover everything related (including history). Textbooks can't do that.

That said, this article is not graspable at first for me either. There is the "even simpler example"[0], but I am not even sure whether it is a proper example (different games can share a state, the amount of money) or just an analogy.

[0] : https://en.wikipedia.org/wiki/Parrondo's_paradox#The_simple_...


Further down the article there's a simpler example: https://en.wikipedia.org/wiki/Parrondo%27s_paradox#A_simplif...


And now somebody edited it to put the simple example first.

I was really confused how anyone could glaze over with such a concise description, but they saw something different than I did.


Breaking our links, nice move!

Unrelated, but I always want to link to permanent versions of wiki articles; there is no culture for it, though, and I am afraid it would annoy people because it is uncommon.


The bit that helped me was here:

> The role of M now comes into sharp focus. It serves solely to induce a dependence between Games A and B, so that a player is more likely to enter states in which Game B has a positive expectation, allowing it to overcome the losses from Game A. With this understanding, the paradox resolves itself: The individual games are losing only under a distribution that differs from that which is actually encountered when playing the compound game. In summary, Parrondo's paradox is an example of how dependence can wreak havoc with probabilistic computations made under a naive assumption of independence.

Trying to rephrase: if you combine systems in a way that the rules of the combination itself serves to "manipulate" the individual conditions of the two systems, then you can get positive outcomes from two things that, individually, would give negative outcomes if not otherwise manipulated. The example with the even simpler "In Game B, you count how much money you have left — if it is an even number you win $3, otherwise you lose $5." (compared to the game where the above quote came from) really sums that up - if you know your starting point you can set up the sequence of those two games to steadily win.
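That even simpler pair can be written out in a few lines. A sketch, assuming (as in the article's simplified version) that Game A simply loses $1 every play, while Game B pays $3 on even capital and takes $5 on odd:

```python
def game_a(c):
    # Game A: lose $1 every play (always losing)
    return c - 1

def game_b(c):
    # Game B: win $3 if your capital is even, lose $5 if it is odd
    return c + 3 if c % 2 == 0 else c - 5

def run(policy, rounds=100, start=0):
    c = start
    for _ in range(rounds):
        c = policy(c)
    return c

only_a = run(game_a)   # -100: loses $1 per round
only_b = run(game_b)   # -100: +3/-5 alternation, net -$2 per two rounds
# Play B when capital is even, A when odd: net +$2 per two rounds
combined = run(lambda c: game_b(c) if c % 2 == 0 else game_a(c))  # +100
```

Both games alone lose $100 over 100 rounds; switching based on the shared state (your capital's parity) wins $100.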


Your comment made me laugh. I feel like that often. Luckily, I read the simple example first :D


Recalls a vulgar job interview joke. The candidate claims that he can detect prostate cancer with his finger for, say, $1k.

The interviewer (who knows he has a positive diagnosis), sensing easy money, plays along, and an exam ensues on the spot.

The candidate pronounces that the interviewer is clear. The interviewer produces the diagnosis, and demands to be paid.

The candidate shrugs and produces the money. The interviewer notes the sanguine candidate and asks if this is the usual ending.

The candidate laughs and says that he had a $10k bet with the interviewer's competitor that he'd have his finger up the interviewer's backside in under an hour.

In summary, the paradox in The Famous Article seems to boil down to https://en.wikipedia.org/wiki/Arbitrage


I don’t really understand how this is a paradox, but it’s definitely surprising and non-intuitive.

It seems like if you have 2 games A and B, the second you start playing them together you’ve effectively created a new game C, which is a game of A and B combined.


> Is Parrondo's paradox really a "paradox"? This question is sometimes asked by mathematicians, whereas physicists usually don't worry about such things. The first thing to point out is that "Parrondo's paradox" is just a name, just like the "Braess's paradox" or "Simpson's paradox." Secondly, as is the case with most of these named paradoxes they are all really apparent paradoxes. People drop the word "apparent" in these cases as it is a mouthful, and it is obvious anyway. So no one claims these are paradoxes in the strict sense. In the wide sense, a paradox is simply something that is counterintuitive. Parrondo's games certainly are counterintuitive—at least until you have intensively studied them for a few months. The truth is we still keep finding new surprising things to delight us, as we research these games. I have had one mathematician complain that the games always were obvious to him and hence we should not use the word "paradox." He is either a genius or never really understood it in the first place. In either case, it is not worth arguing with people like that.


> I don’t really understand how this is a paradox,

You do:

> but it’s definitely surprising and non intuitive.

That's a common definition of a paradox: "a seemingly absurd or self-contradictory statement or proposition that when investigated or explained may prove to be well founded or true."


Ok I misunderstood what it means to be a paradox, this does seem to apply. The point I was intending to make is that the composition of 2 games entails a new game which creates an entirely new ruleset and therefore new potential outcomes. On the spectrum of paradoxes this one doesn’t feel particularly profound, but perhaps it’s due to the examples being especially contrived.


Exactly. Of course you can rig up a set of game rules that lead to a specified outcome under specified conditions. This paradox just boils down to a game rule that says "Alternately play these two subgames." I don't get it.


It sounds counterintuitive, but it's not that hard to come up with an example.

Suppose you have a game where your score is A*B. The strategies of only increasing A or only increasing B are losing ones, but combining them gives a winning strategy.
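A toy sketch of that A*B idea, with my own assumptions bolted on: both quantities start at zero, "losing" just means never scoring, and you get a budget of ten unit increments (BUDGET is made up):

```python
def score(a, b):
    # the game described above: your score is A * B
    return a * b

BUDGET = 10  # assumption: ten unit increments to allocate between A and B

all_a = score(BUDGET, 0)                 # 0: every increment into A
all_b = score(0, BUDGET)                 # 0: every increment into B
split = score(BUDGET // 2, BUDGET // 2)  # 25: splitting between the two
```

Either pure strategy scores zero forever; any mix scores something, and an even split maximizes it.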


It's not really a paradox. Once you start playing Game A + Game B, then it becomes Game C (a new game entirely) which resembles the original two games but now has a different rule for winning.


I've always found this paradox interesting, as it was discovered fairly recently (1996) compared to other paradoxes in math.


It's because the paradox relies on an unintuitive "mistake in your homework" premise that defines things in an unhelpful way. That Game A+B+Rule C has different outcomes from Game A and Game B is obvious. You have to mistakenly hide Game C from your mind in order to make a paradox. It's a word game, mis-describing what is happening before doing it.

The resolution is "stop hitting yourself".


I'm not sure your statement is doing it justice.

The idea that a combination of two losing things can be a winning thing specifically doesn't seem intuitive. Lots of game-theoretic analyses involve decomposing a complex game into different situations and trying to solve each situation separately, especially for hidden-information games with multiple rounds (e.g. Poker or card games generally). The assumption underlying this approach -- that by combining winning strategies you can approximate an overall optimal solution -- breaks down in the face of this, since you might also need to consider combinations of losing strategies, as those losing strategies may combine to form a winning strategy overall.


Life itself is a losing game :-)

We die at the end, and all we've accumulated isn't worth a thing. Game over.

But paradoxically, by being alive, playful and involved in playing the losing game of life, we win... moment after moment.



