
At its heart, Monte Carlo is just trying something a bunch of times and seeing what happens, right?


My wife and I used Monte Carlo to buy our second house. We were trying to sell a house, buy a new one, juggle a couple of other transactions, predict interest rates, all while trying to understand our new budget (we were newly together and hadn't lived together yet at that time).

So, there were a bunch of different variables that led to a final scenario of how big our final monthly payment would be. We were both conservative in our choices, too.

The trick with Monte Carlo is that if we had been conservative on our predictions on every single variable, I don't think we ever would have bought our house.

But by doing a Monte Carlo simulation - assuming each variable was roughly independent of the others, and picking a sane-seeming variance for each prediction - we were able to simulate 10,000 scenarios, and then pick the numbers that led to us meeting our budget in 95% of cases. The key number there was the "asking price" of the house we were looking for.

That asking price - that was safe in 95% of scenarios - was much higher than it would have been had we been conservative with every single variable.

And it worked great. The conservatism of the 95% estimate gave us some wiggle room, a couple of the other variables broke our way, and our combined monthly budget still allowed us to save more money than when we were living separately.
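A minimal sketch of how a simulation like this might look, in Python. Every number here (budget, sale proceeds, rates, closing costs) is a hypothetical placeholder, not the actual figures from the story; the structure is the point: sample each uncertain variable independently, then search for the highest asking price that fits the budget in 95% of scenarios.

```python
import random

BUDGET = 3_500.0  # hypothetical maximum monthly payment
N = 5_000         # scenarios simulated per candidate asking price

def monthly_payment(principal, annual_rate, years=30):
    """Standard amortized-mortgage payment formula."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def fraction_within_budget(asking_price, seed=0):
    """Fraction of simulated scenarios where the payment fits the budget."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(N):
        # Each variable: a rough mean guess plus a sane-seeming variance.
        sale_proceeds = rng.gauss(250_000, 20_000)  # equity from the old house
        rate = rng.gauss(0.065, 0.005)              # predicted mortgage rate
        closing_costs = rng.gauss(15_000, 3_000)
        principal = asking_price - sale_proceeds + closing_costs
        if monthly_payment(principal, rate) <= BUDGET:
            ok += 1
    return ok / N

# Highest asking price that stays within budget in >= 95% of scenarios.
price = 300_000
while fraction_within_budget(price + 25_000) >= 0.95:
    price += 25_000
print(price)
```

The 95% threshold is what buys the wiggle room: no single variable has to be worst-case, only the combination has to work out often enough.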


I'd love to have a web-app for that


Me too. Or a layman's level worked write-up.


Monte Carlo has been called "integration by darts." This is a play on words of the better-known "integration by parts" method of undergraduate calculus.


Indeed. A classic (and simple) Monte Carlo procedure lets you estimate pi by essentially throwing darts.

1) Draw a unit square.

2) Inscribe a circle inside the square.

3) Throw a whole bunch of darts at the square, distributing them as randomly as possible.

4) Count the number of dart holes inside the circle. Divide by the total number of dart holes. Now you have an estimate of the ratio of the areas of the circle and the square. Call this number C. Since you know the area of the square is 1, the area of the circle must be C. C is also equal to pi * r^2, where r is the radius of the inscribed circle (which is 0.5). Thus, C = pi * (0.5)^2 = pi/4. Pi is therefore approximately 4C.
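The four steps above fit in a few lines of Python; a dart is just a pair of uniform random coordinates, and "inside the circle" is a distance check against the center:

```python
import random

def estimate_pi(n_darts=100_000, seed=42):
    """Estimate pi by throwing random darts at the unit square.

    The inscribed circle has center (0.5, 0.5) and radius 0.5, so a dart
    is inside it when its squared distance from the center is <= 0.25.
    """
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_darts):
        x, y = rng.random(), rng.random()
        if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25:
            inside += 1
    c = inside / n_darts  # ratio of circle area to square area, ~ pi/4
    return 4 * c

print(estimate_pi())  # near 3.14; the error shrinks like 1/sqrt(n_darts)
```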


How do you figure out how many holes are inside the circle if you don't already know pi, though? I mean, a human could do it, but I don't follow how this extrapolates to a mathematical process.


Check the distance from the center to the dart?


Ok, that makes sense.


At its heart, yes. But that is also one of the basic tenets of probability theory in general, going back to its very beginnings: the law of large numbers. Monte Carlo usually refers to situations where "trying something a bunch of times" is not a straightforward procedure, because some "somethings" are more interesting than others, and there is also the question of how large the "bunch" of times should be to see the true picture.
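A quick sketch of the law-of-large-numbers part, using a fair coin as a toy example: the empirical average converges to the true probability, but only as the sample grows.

```python
import random

def empirical_heads_rate(n, seed=0):
    """Fraction of heads in n simulated fair-coin flips.

    By the law of large numbers, this approaches 0.5 as n grows.
    """
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n))
    return heads / n

for n in (100, 10_000, 1_000_000):
    print(n, empirical_heads_rate(n))  # estimates tighten around 0.5
```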


I think you create a Markov chain and then see whether the chain ends up in a steady state somewhere.

Bayesians use MCMC to get at the joint distribution indirectly, without actually doing the integration, by sampling from the distribution and averaging over the chain. You start anywhere reasonable, or guess where to start the chain from, so you usually burn the first 10-15% of the chain and take the average of the rest.

You can have multiple chains in the simulation; each chain represents a parameter estimate.

edit:

The reason Bayesians use MCMC is that integration is hard, and each model has its own integration problem. If you choose to model salmon migration, you may have your own take on the model, and in the end you have this nasty integral for that take of your model. If you change your model, you have a new integration problem... You can try to integrate it, or you can just MCMC it and bypass integration. Instead of integrating to get the PDF, you can just sample from the joint distribution and estimate it (the parameters of the distribution) from the sample.
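The "sample instead of integrate" idea can be sketched with a random-walk Metropolis sampler, the simplest MCMC variant. The target here is a made-up example (an unnormalized Normal(3, 1) log-density); the key point is that the algorithm only ever needs density *ratios*, so the normalizing integral is never computed, and the first chunk of the chain is discarded as burn-in, as described above.

```python
import math
import random

def metropolis(log_pdf, n_steps=50_000, step=1.0, start=0.0, seed=1):
    """Random-walk Metropolis sampler over an unnormalized log-density.

    Proposals are accepted with probability min(1, p(prop)/p(x)),
    so only ratios of the density matter - no integration required.
    """
    rng = random.Random(seed)
    x = start
    lp = log_pdf(x)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0, step)
        lp_prop = log_pdf(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Unnormalized log-density of Normal(3, 1); pretend the constant is unknown.
log_target = lambda x: -0.5 * (x - 3.0) ** 2

chain = metropolis(log_target)
burned = chain[len(chain) // 10:]  # discard the first 10% as burn-in
mean = sum(burned) / len(burned)
print(round(mean, 2))  # should land near 3, the mean of the target
```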


This is what I've always believed. Happy to be corrected or refuted!

You're looking at something whose shape has unknowns. You do not have the luxury of exhaustively testing every variation of the inputs or the model. You need an optimisation which explores the space and allows you to intuit refinements to the model.

In some cases, you have the experimental data. So you are looking for a decision-logic over which part(s) to use, and how to interpret them.

You test with a random selection of inputs, models, conditions and see how they cluster. The questions about what to do, would be "how many" and "how random-y"


I guess pared down to its absolute core, this is what Monte Carlo is - you just generate a large ensemble of possible states.

But this simplified explanation misses one key aspect of Monte Carlo: sometimes specialized Monte Carlo moves can be designed that sample the phase space more efficiently than other methods such as gradient descent.

Unfortunately, doing so can be very involved, and is not always very general, so it isn't as easy as using other methods for exploring phase space.
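One way to see the "escapes where gradient descent gets stuck" point is a toy double-well energy (my own construction, not anything from a real application). A Metropolis move occasionally accepts an *uphill* step with probability exp(-dE/T), which lets the walker hop over the barrier between the wells; a pure downhill method started in the shallow well stays there.

```python
import math
import random

def energy(x):
    # Double well: shallow local minimum near x = -1,
    # deeper global minimum near x = +1.
    return (x * x - 1.0) ** 2 - 0.3 * x

def metropolis_minimize(start=-1.0, temp=0.5, step=0.5, n=20_000, seed=7):
    """Metropolis search: uphill moves are accepted with prob exp(-dE/temp),
    so the walker can cross the barrier at x = 0 that would trap a
    strictly-downhill method started at x = -1."""
    rng = random.Random(seed)
    x, best = start, start
    for _ in range(n):
        prop = x + rng.gauss(0, step)
        dE = energy(prop) - energy(x)
        if dE < 0 or rng.random() < math.exp(-dE / temp):
            x = prop
        if energy(x) < energy(best):
            best = x
    return best

print(round(metropolis_minimize(), 2))  # lands near +1, the global minimum
```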


Your answer feels like it's saying "it's only partly random; you use the outcome of your experiments to walk the space of possible solutions more carefully, in the light of experience", which would make it more like GLMM or something Markov-y



