Cool, this is excellent. AOL embedded Tcl interpreters attached to a network listener for all the services, so you could telnet in, see what was going on in memory, change config, and so on. A Tcl command is just a CS 1 program with argv and argc, so even pretty junior people could safely add commands. The framework was event based, no threads, so while the Tcl was running nothing else was being done; you could rearrange memory and it was cool as long as you finished before the end of the command. It would be pretty simple to add a control-port thing like this to Go. The advantage of Tcl, with its simple syntax, is that most commands are just `cmd arg1 arg2 arg3`, but if you have some general logic, you can express that as a Tcl script.
This is horrifying. Thanks for the nightmare fuel.
> while the tcl was running nothing else was being done, so you could rearrange memory and it was cool as long as you finished before the end of the command
My mind immediately springs to all the ways this can go horrifically wrong.
Tcl uses a memory model internally where each piece of memory belongs strictly to the thread that allocated it, with this enforced in lots of places. There are a few loopholes past it (for process-wide concepts such as the current working directory cache, or for inter-thread messaging) but by and large you write what appears to be single-threaded code.
There's support for deep coroutines and non-blocking I/O so it isn't limiting. You only really need multiple threads when dealing with compute-heavy code or a particularly crufty API.
What server are you talking about? I distinctly remember naviserver/aolserver being multithreaded and being able to share information between threads using a specific set of nsv_* commands.
SAPI - the Server API C stack. AOLserver was a web server purchased from some small innovative company. It also had Tcl, but was threaded. Way harder to extend than the No Threads Kernel that back-ended all the classic AOL clients and the AIM clients. I wrote the ns_flap.so shared library for the threaded AOLserver and I maintained the non-threaded flap library (a simple framing layer for TCP connections, https://handwiki.org/wiki/OSCAR_protocol#FLAP_header).
It was designed to be easily horizontally scalable, and to make it easy to write code that was stable. We would put one process per CPU, and leave one for the OS. We would also put in one Ethernet card per process, if it was one that talked to the internet, and then used a user-land TCP stack on top of DLPI (?).
The services there were all high volume but not super latency sensitive, so a poll/select loop that spun a thousand times a second or so was fine. You could add or reload all kinds of config without restarting and hence without losing any traffic. It was designed for zero-downtime maintenance.
They weren’t, mostly, web servers, but handled various proprietary network formats from the proprietary clients and amongst themselves: point-to-point mesh connections for AIM, hub and spoke for AOL, smart clients that could route to the appropriate endpoint based on the message contents. AIM had a “reconnect here” host-to-client message so you could dynamically move load around for maintenance. It was a select or poll loop like libuv, but from before we had more than a few hundred hosts to interconnect. The entire internet and all client traffic came thru one file descriptor for the dedicated Ethernet for that process.

It was all straight C: timers, various layers of network protocols (TCP, SSL, FLAP, etc.). The AOL client servers had lots of international localizations, which could be reloaded via a Tcl command. You could see histograms of all the network latencies, at various levels, in a Tcl command. I think at some point we added timestamps to the prefix of allocated memory and had histograms of memory lifetimes. (Mostly we only allocated large chunks of memory, then at steady state would shuffle stuff off of a free list.)
There were a few HTTP servers written for things like AOLTV and as an RPC gateway from third parties into the “host complex”, e.g. one called ewoks was used for user registration. It was headless; it only gave out 302 replies or 4xx replies.