Are they progressing quickly? Or was there a step-function leap about 2 years ago, and incremental improvements since then?
I tried using AI coding assistants. My longest stint was 4 months with Copilot. It sucked. At its best, it does the same job as IntelliSense but slower. Other times it insisted on trying to autofill 25 lines of nonsense I didn't ask for. All the time I saved using Copilot was lost debugging the garbage Copilot wrote.
Perplexity was nice to bounce plot ideas off of for a game I'm working on... until I kept asking for more and found that it'll only generate the same ~20ish ideas over and over, rephrased every time, and half the ideas are stupid.
The only use case that continues to pique my interest is Notion's AI summary tool. That seems like a genuinely useful application, though it remains to be seen if these sorts of "sidecar" services will justify their energy costs anytime soon.
Now, I ask: if these aren't the "right" use cases for LLMs, then what is, and why do these companies keep putting out products that aren't the "right" use case?
This might appear to be a shallow answer, but I don't think it is. AI has taken a very long road from its early conceptions, by Turing and others, to a tool whose value we can argue about, but which is getting attention and use everywhere.
The mere fact that "are they progressing rapidly" is even a question is a testament to an incredible uptick in the speed of progression.
"Is AI progressing quickly?" is the new "Are we there yet?"
Have you tried it recently? o3-mini-high is really impressive. If you ease into talking to it about your intent and outline the possible edge and corner cases, it will write nuanced Rust code 1000 lines at a time, no problem.
The use cases I list are all from the past 8 months. One of the things that drove me away from Copilot and chatbots is that I just write better code faster than they can. I could sit there for an hour, fiddling with prompts and copy-pasting output into a text editor, or I could just write the damn code.