You no longer have the console as the primary interface, but a GUI, which 99.9+% of computer users control via a mouse.
You no longer have the screen as the primary interface, but an AUI, which 99.9+% of computer users control via a headset, earbuds, or a microphone and speaker pair.
You mostly speak and listen to other humans, and if you're not reading something they've written, you could have it read to you in order to detach from the screen or paper.
You'll talk with your computer while in the car, while walking, or while sitting in the office.
An LLM makes the computer understand you, and it allows you to understand the computer.
Even if you use smart glasses, you'll mostly talk to the computer generating the displayed results, and it will probably also talk to you, adding information to the displayed results. It's LLMs that enable this.
Just don't focus too much on whether the LLM knows how high Mount Kilimanjaro is; its knowledge of that fact is simply a hint that it can properly handle language.
Still, it's remarkable how useful they are at analyzing things.
LLMs, or whatever technology succeeds them, have a bright future ahead.
I don’t even dispute that they might become useful at some point, but when I point a mouse at a button and click it, the result is a reliable action.
When I use an LLM (I have so far tried Claude, ChatGPT, DeepSeek, and Mistral), it does something, but that something usually isn’t what I want (~the linked tweet).
Prompting, studying and understanding the result, and then cleaning up the mess, all for the low price of an expensive monthly sub, leaves me with worse results than if I had done the thing myself. It usually takes longer, and it often leaves me with subtle bugs I’m genuinely afraid will grow into exploitable vulnerabilities.
Using it strictly as a rubber duck is neat but also largely pointless.
Since other people are getting something out of the tech, I’ll just assume that the hammer doesn’t fit my nails.
These are early days, and the technology will only improve. The premise here is "I genuinely don't understand why some people are still bullish about LLMs", which I just can't share.
When the mouse and GUI were invented, nobody needed to say "just wait a couple of years for it to improve and you'll understand why it's useful; until then, please give me money". The benefits were immediately obvious and improved the experience for practically every computer user.
LLMs are very useful for some (mostly linguistic) tasks, but the areas where they're actually reliable enough to provide more value than just doing it yourself are narrow. But companies really need this tech to be profitable and so they try to make people use it for as many things as possible and shove it in everyone's face[0] in hopes that someone finds a use-case where the benefits are indeed immediately obvious and revolutionary.
[0] For example, my dad's new Android phone by default opens the Gemini AI assistant when you hold the power button, and it took me minutes of googling to figure out how to turn the damn thing off. Whoever at Google thought this would make people like AI more is in the wrong profession.
It's like a mouse that some variable proportion of the time pretends it's moved the cursor and clicked a button, but actually it hasn't and you have to put a lot of work in to find out whether it did or didn't do what you expected.
It used to be annoying enough just having to clean the trackball, but at least you knew when it wasn't working.