
The memory issue is Java's, not event-based C's. (Don't believe me? Go to the language shootout, compare Java vs. whatever, and look at the memory use.)

Also what does "a million users" mean? A million open connections? How many requests active simultaneously?

One million simultaneous HTTP requests would sure eat a lot of memory, but who does it that way? It's wasteful. The kernel handles socket buffers for you, and each processor core can only read one socket at a time, so as long as you don't take too long to respond to requests you should be fine.

Plus, with event-based libraries you can ask for a read of at least X bytes on the socket, effectively "waiting" until the essentials of the request have arrived.

I don't know what they're trying to do; it all depends on what the server does. For pure messaging across connections, C/libevent should be an order of magnitude better. But as I said, I need more info.



Yes, Java adds some overhead, but you still have the low-level buffers.

It's a million open connections we're talking about. All active, connected.

Agreed, C/assembly would obviously reduce the memory footprint. But is the extra work worth it? I'd say not for most apps.


D'oh, from TFP: This code (mochiconntest_web.erl) just accepts connections and uses chunked transfer to send an initial welcome message, and one message every 10 seconds to every client.

1,000,000 clients / 10 s = 100,000 writes per second, spread across connected sockets. My guesstimate is that's not much :)

Based on this graph: http://monkey.org/~provos/libevent/libevent-benchmark2.jpg

[BTW sorry for replying to my own comment, and sorry I didn't read TFA.]



