I think the HN-relevant use case would be looking at this from the opposite side: you've designed two games (or algorithms), and each results in a winning state, but when they are used alternately they lead to worse outcomes. A made-up, possibly bad example using similar 'rules':
Start with: 1 large fixed-size data structure and 1 cache.
Algorithm A: As it iterates, it checks whether the cache is populated. If not, it generates the cache data (SLOW), but can then iterate over the entire data structure quickly. The first time it does this is a loss, but the second pass through the data structure leads to an overall win.
Algorithm B: Prefers an empty cache; if the cache is populated, it deletes it. It can iterate over the entire data structure fairly quickly, so each pass is considered a win.
Now you have a program with many different features, and everyone knows it doesn't really matter whether you use Algorithm A or B: they're both programmed to work safely together, and if you test a feature using one of the algorithms it will be fast either way, so the choice is left to personal preference. The fun begins when the full program starts alternating between Algorithm A and B.
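To make that concrete, here's a toy cost model of the scenario above. The cost numbers are made up purely for illustration; the only state that matters is whether the cache is warm:

```python
# Hypothetical costs (arbitrary units, chosen only to illustrate the point):
#   Algorithm A: cold cache -> cost 10 (builds the cache), warm cache -> cost 1.
#   Algorithm B: cost 3 always; deletes the cache if it finds one populated.

def run(schedule):
    """Total cost of running a sequence of 'A'/'B' passes over the data."""
    cache_warm = False
    total = 0
    for alg in schedule:
        if alg == "A":
            total += 1 if cache_warm else 10
            cache_warm = True    # A leaves the cache populated
        else:  # "B"
            total += 3
            cache_warm = False   # B deletes the cache
    return total

print(run("A" * 10))   # 19: one slow pass, then nine fast ones
print(run("B" * 10))   # 30: consistently fairly quick
print(run("AB" * 5))   # 65: A never gets a warm cache, so every A pass is slow
```

Ten passes of either algorithm alone cost 19 or 30; ten alternating passes cost 65, because B keeps destroying the state A paid to build.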
I think a closer-to-real example is the problem of playing a collection of blackjack tables. All the games are the same, but sometimes the state of the decks (i.e., which cards have been discarded) will give better odds of winning. If you know the state of the decks at each table, you can always choose to play the table with the best odds. This type of strategy has been used to win piles of money in Vegas, FWIW, though it leads to ejection if you're caught.
This is similar to the 'ratchet' examples on the Wikipedia page: you play the game with the best odds, and use one game to 'cool off' until you're in the right state to win the second game again. The games in the Wikipedia article are kinda unsatisfying, though; there's too much dependence on player state.
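For anyone who wants to see the ratchet in action, the capital-dependent games from the Wikipedia article are easy to simulate. The probabilities below are the standard Parrondo parameters; the step count and seed are arbitrary choices of mine:

```python
import random

def avg_gain(pick, steps=1_000_000, eps=0.005, seed=1):
    """Average capital change per step when always playing 'A',
    always playing 'B', or (any other value) a random mix of both."""
    rng = random.Random(seed)
    capital = 0
    for _ in range(steps):
        game = pick if pick in ("A", "B") else rng.choice("AB")
        if game == "A":
            p = 0.5 - eps   # a slightly unfavorable coin flip
        else:
            # Game B's odds depend on the current capital (mod 3)
            p = (0.1 - eps) if capital % 3 == 0 else (0.75 - eps)
        capital += 1 if rng.random() < p else -1
    return capital / steps

print(avg_gain("A"))     # negative drift: A alone loses
print(avg_gain("B"))     # negative drift: B alone loses
print(avg_gain("mix"))   # positive drift: randomly alternating wins
```

The player-state dependence is right there in the `capital % 3` branch: game A exists mostly to shuffle the capital off the multiples of 3 where game B is brutal.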
Sure, but I want to imagine that some people are only playing Game A and others are only playing Game B, unaware of the relation between them that creates positive outcomes by sometimes losing in one game or the other.
Indeed, there's no way this is the product of actual game theory because it is using a pathologically incompatible definition of "game" where the player is allowed to change the rules.
Later on it's revealed that the whole thing is indeed pointlessly daft:
> In summary, Parrondo's paradox is an example of how dependence can wreak havoc with probabilistic computations made under a naive assumption of independence.
In other words, this is a load of time-wasting BS, based on doing a stupid thing at the outset that anyone thinking clearly sees right away.
In this case, hasn't the game simply become Game A + Game B?
It's just a larger game with a distinct winning strategy because the ruleset is expanded, right?
What's the significance?