
Not really... especially if this is an in-memory database attached to a network, as implied.

It’s only slower if someone can observe the difference, which I don’t think they would be able to in this design.

I’m a strong proponent of using fewer, larger machines and services, instead of incurring the overhead involved in spreading things out into a million microservices on a million machines. But there is a balance to be achieved, and beyond a certain point... synthetic improvements in performance don’t show up in the real world.

Queuing up a few database requests concurrently to make up for the overhead of literally hundreds of nanoseconds of latency is trivial, especially when Optane can service those requests concurrently, unlike a spinning hard drive. Applications running on other machines won’t be able to tell a difference.
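The pipelining idea above can be sketched in a few lines. This is a minimal illustration, not anything from the thread: `fetch` is a hypothetical stand-in for one network round trip to the store, and the point is that issuing requests concurrently makes total wall time roughly one round trip instead of one per request.

```python
import asyncio

async def fetch(key):
    # Hypothetical single round trip to the in-memory store;
    # the sleep stands in for per-request network + device latency.
    await asyncio.sleep(0.0001)
    return key * 2

async def fetch_batch(keys):
    # Issue all requests at once rather than awaiting each in turn,
    # so the latencies overlap instead of adding up.
    return await asyncio.gather(*(fetch(k) for k in keys))

results = asyncio.run(fetch_batch(range(8)))
print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

With eight sequential awaits the wall time would be ~8× the single-request latency; with `gather` it stays close to 1×, which is why the per-request overhead stops being observable from another machine.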

But, agree to disagree.

There are probably applications where these mega machines are useful, but I don’t personally find this to be a compelling example.

I readily admit that I could be wrong... but neither of us have numbers in front of us showing a compelling reason for a company to spend unbelievable amounts of money on a single machine. My experiences (limited compared to many, I’m sure) tell me this isn’t the winning scenario, though.



> I readily admit that I could be wrong... but neither of us have numbers in front of us showing a compelling reason for a company to spend unbelievable amounts of money on a single machine.

$500k on a machine isn't a lot of money compared to engineers. Even if you buy four of them for test, staging, and 2x production, it's not a lot compared to the amount spent on programming.
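As a rough back-of-the-envelope version of that claim (the team size and salary figures below are illustrative assumptions, not numbers from the thread):

```python
machine_cost = 500_000           # one big machine, per the figure above
machines = 4                     # test / staging / 2x production
hardware_total = machine_cost * machines

engineers = 10                   # hypothetical team size
fully_loaded_cost = 250_000      # hypothetical annual cost per engineer
annual_eng_cost = engineers * fully_loaded_cost

print(hardware_total)   # 2000000, mostly a one-time capital expense
print(annual_eng_cost)  # 2500000, recurring every year
```

Under these (made-up) assumptions the one-time hardware outlay is already smaller than a single year of engineering payroll, which is the shape of the argument being made here.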


One thing being expensive doesn’t make another thing not expensive.

It’s possible for them both to be independently expensive, and I’m saying that unless you can show the performance difference actually affects company profits, buying those expensive machines is a huge waste of company money.

A lot of applications will actually perform worse in NUMA environments, so you’re spending more money to get worse performance.

Reality isn’t as simple as “throw unlimited money at Intel to save engineering time.” Intel wishes it was.

Engineering effort will be expended either way. It is worth finding the right solution, rather than the most expensive solution. Especially since that most expensive solution is likely to come with major problems.




