The memory issue is Java's, not event-based C's. (Don't believe me? Go to the language shootout, compare Java vs. whatever, and look at the memory use.)
Also what does "a million users" mean? A million open connections? How many requests active simultaneously?
One million simultaneous HTTP requests would sure eat a lot of memory, but who does it that way? It's stupid. The kernel handles socket buffers for you, and each processor core can only read from one socket at a time anyway. So as long as you don't take too long to respond to each request, you should be fine.
Plus, with event-based libraries you can set a read watermark on the socket of at least X bytes, so your handler effectively "waits" until the bulk of the request has arrived.
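In libevent this is `bufferevent_setwatermark()` with `EV_READ`; below is a minimal Python sketch of the same idea using `selectors` on a socketpair. The 16-byte watermark and the request bytes are made-up values for illustration, not anything from the post:

```python
import selectors
import socket

MIN_BYTES = 16  # hypothetical watermark: don't process until this much has arrived

def watermark_read(conn, buffers):
    """Accumulate data and only hand it to the application once the
    watermark is reached -- what bufferevent_setwatermark(bev, EV_READ,
    MIN_BYTES, 0) would do for you in libevent."""
    data = conn.recv(4096)
    if data:
        buffers[conn] = buffers.get(conn, b"") + data
    if len(buffers.get(conn, b"")) >= MIN_BYTES:
        return buffers.pop(conn)  # enough of the request has arrived
    return None  # still below the watermark; keep buffering

# Demo: two small writes; the handler only "fires" after the second one.
a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)
sel = selectors.DefaultSelector()
sel.register(b, selectors.EVENT_READ)

buffers = {}
results = []
a.send(b"GET / HT")              # 8 bytes: below the watermark
for key, _ in sel.select(timeout=1):
    results.append(watermark_read(key.fileobj, buffers))
a.send(b"TP/1.1\r\n")            # now 16 bytes total: watermark reached
for key, _ in sel.select(timeout=1):
    results.append(watermark_read(key.fileobj, buffers))

print(results)  # first event buffers (None), second returns the full prefix
```

The point is that the event loop never wakes your request parser for dribbles of input; the buffering layer absorbs them until there's enough to act on.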
I don't know what they're trying to do; it all depends on what the server does. For pure message-passing across connections, C/libevent should be an order of magnitude better. But as I say, I need more info.
D'oh, from TFP:
This code (mochiconntest_web.erl) just accepts connections and uses chunked transfer to send an initial welcome message, and one message every 10 seconds to every client.
1,000,000 clients / 10 seconds = 100,000 writes per second across connected sockets. My guesstimate is that's not that much :)
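As a sanity check on that arithmetic (the 100-byte chunk size is my own assumption, not from the post):

```python
clients = 1_000_000
interval_s = 10                         # one chunked message per client every 10s
writes_per_sec = clients // interval_s  # -> 100,000 writes/s
print(writes_per_sec)

chunk_bytes = 100                       # hypothetical size of each chunked message
mb_per_sec = writes_per_sec * chunk_bytes / 1e6
print(mb_per_sec)                       # ~10 MB/s of output, well within one NIC
```

Even at these numbers, the write load is trivial for modern hardware; the interesting cost is holding a million idle connections open, not pushing the messages.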