Hacker News | new | past | comments | ask | show | jobs | submit | digitaltrees's comments

What do you mean, "if"? The internet is built on HTML. Markdown is a random flare-up of an annoying annotation standard pushed by LLMs. No one used Markdown at anywhere near the scale of HTML before ChatGPT.

So you write an HTML README? Markdown is what I write 99% of the time, unless I'm in a rich editor that produces HTML behind the scenes.

HTML is a communication layer. Hardly anyone writes it directly anymore; it's generated by React or the like, or even by an LLM.


That's a completely reductive idea of Markdown. Markdown long predates LLMs; it arose from the need for lightweight formatting and easy conversion to other formats. That no one used it at that scale doesn't mean the utility was absent; the primary adopters were the tech community, a layer below the consumer market. The utility obviously stands. The question is whether HTML can supersede it on that utility side across all developer workflows.

lmao

I think a lot of people are excited by what feels like insane new velocity and tempted to ignore the hard-learned lessons of good code vs. bad code.

Here is a real conversation. “I built this app so fast it feels amazing, but then I looked at the code and it had a 6,000-line class with one function that was 3,000 lines of if statements.”

“Oh yeah, that’s bad. You definitely need to refactor that.”

“I thought that too, but I wonder if it’s actually better to have a big class in a single file, because that’s easier for the AI to understand than if it were split across multiple files.”

“Umm, OK, but do you even understand the 3,000-line function? Couldn’t it be broken into better code than that if/else soup?”

That conversation went on like that for a while.

Meanwhile, I have settled on a process: I built a framework with good architecture baked in, and my version of using AI is essentially enforcing compliance with that architecture and its coding patterns.

When Cursor moved to an agent view that removes human review, I built my own IDE to ensure I never have to adopt stupid coding practices. I used AI to build it, and had to constantly stop the AI from doing stupid things.

I am happy to share patterns and tools with you. AI can be a massive accelerator and produce good code when managed effectively, but it requires a commitment to good code and a willingness to ignore where the industry hype is right now.


Having used AI to write code, and seen the BS it outputs half the time, I think any org speed-running toward a parallel, autonomous, unreviewed codebase is in for a massive rude awakening when their cluster-f of a codebase melts down.

I think you mean no more humans. Gross.

Yes. I am seeing a big push to use vanilla JS for single-file HTML apps that are easy to build, deploy, and distribute because they have no build step. I could see component libraries emerging that make it easier to build from chat interfaces with less ceremony.
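To make the idea concrete, here is a minimal sketch of what a "component" can look like in that no-build-step style: just a function returning markup, inlined into a single HTML file's script tag. The helper and component names below are illustrative, not from any particular library.

```javascript
// Minimal "component" helpers for a single-file vanilla JS app.
// Everything ships in one index.html with an inline <script> -- no bundler.
const el = (tag, attrs, children) =>
  `<${tag}${Object.entries(attrs || {})
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("")}>${(children || []).join("")}</${tag}>`;

// A tiny reusable component built from the helper.
const TodoItem = (text) => el("li", { class: "todo" }, [text]);

// Compose components into a page fragment, then assign it to
// document.body.innerHTML (or similar) in the browser.
const page = el("ul", { id: "todos" }, ["milk", "eggs"].map(TodoItem));
```

The appeal is exactly the lack of ceremony: no JSX, no compiler, nothing a chat interface can't emit verbatim into one file.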

I'm not sure the trade-off in code readability is worth it as of now.

I wonder if it really needs to be worse. I am playing with the idea of fine-tuning a model on my exact stack and coding patterns. I suspect I could get better performance by training “taste” into a model rather than breadth.

I also wonder about JS-only, Python-only, etc. models.

Maybe the future is a selection of local, stack-specific trained models?


There is some recent work on modularizing knowledge in LLMs.

https://arxiv.org/html/2605.06663v1

It might be possible to train a big generalist that is a composition of modules, some of which can be dropped dynamically at inference time, depending on the prompt.


These models' ability to generalize at coding will likely get worse if you remove high-quality training data like all of Python.

That approach has its advantages, but sometimes I want to generate code, using the accepted best practices, for a language or kind of project I’m not experienced with.

Fine-tuning these models (at least with PPO or equivalent) requires even more VRAM than inference does, potentially 2-3 times more.

You could use PEFT? Operating on only a subset of weights is fairly standard practice nowadays …

Yes, I used LoRA and it’s fine, but I’m not convinced the model doesn’t end up dumber and less general.
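The "subset of weights" point upthread is easiest to see in the LoRA arithmetic itself. Here is a toy pure-JS sketch (dimensions and values are illustrative; no ML framework involved): the frozen base weight W is never updated, and only the small low-rank factors B and A would receive gradients, which is why optimizer memory shrinks so much.

```javascript
// Toy LoRA arithmetic. The frozen base weight W (d x k) stays untouched;
// only the low-rank factors B (d x r) and A (r x k) would be trained.
const matmul = (X, Y) =>
  X.map(row => Y[0].map((_, j) => row.reduce((s, x, i) => s + x * Y[i][j], 0)));

const d = 4, k = 4, r = 1, alpha = 2;       // in practice r << d, k
const W = Array.from({ length: d }, () => Array(k).fill(1)); // frozen base
const B = [[1], [0], [0], [0]];             // d x r, trainable
const A = [[0.5, 0, 0, 0]];                 // r x k, trainable

// Effective weight used at inference: W_eff = W + (alpha / r) * B A
const delta = matmul(B, A).map(row => row.map(v => (alpha / r) * v));
const Weff = W.map((row, i) => row.map((w, j) => w + delta[i][j]));

// Trainable parameter count drops from d*k to r*(d + k).
const fullParams = d * k;                   // 16 in this toy case
const loraParams = r * (d + k);             // 8 in this toy case
```

At realistic sizes (say d = k = 4096, r = 16), the same formula gives roughly 16.8M full parameters vs. about 131K trainable ones per layer, which is the memory win; the open question in the comment above is whether constraining updates to that low-rank subspace also quietly costs generality.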

I built my own IDE and run my own model specifically to have private agentic coding. I can still access model APIs, but I can be purely local if I want to. It’s amazing.

Curious, why did Zed with ACP not work for you?

Because I wanted the full IDE on my iPhone, so I can code while away from my laptop doing fun stuff with my kids. And I don’t like the Claude/Codex fire-and-forget approach.

The IDE I built has a full terminal, file system, git integration, and an AI agent. It uses a private, persistent cloud Linux container, so I can install packages and do anything I want from any phone, computer, or browser. It’s amazing that we live in a time when we can build custom software for ourselves just for fun. I will never have to worry about Cursor or VS Code changing, getting bought, and being mothballed like Atom (my favorite IDE). I now own my tool, and will forever.


It will literally break overnight when some key dependency changes. Your LLM might not be able to fix it. Then I guess you regenerate it all from scratch? Sounds exhausting, tbh.

I’ve built enterprise software for 10 years, with multiple upgrades over that time. With good test coverage and the right abstractions, maintenance is feasible.

Also, because I wrote and own the code, I don’t have to update if I don’t want to. I could choose instead to build around the dependency. That’s much more control than I had when Microsoft bought GitHub and destroyed the Atom IDE, which I loved, in favor of VS Code, which I still hate.


I'm just guessing, but an IDE that needs 3D acceleration just for its UI to run "smoothly"? That is ridiculous.

Who runs an IDE with LLM agents accessing their local filesystem on bare metal?

Or am I alone in running everything LLM-related in a VM, just for development work? And because of Zed's genius decision, you need to share your GPU with the VM, which breaks important features like snapshots. So you need a workaround for that too, etc.

Too much hassle; Zed is not for me.

But I'm anti-Apple, so maybe that's the reason :)

Btw, even the ImHex devs realized this, and they provide a version without acceleration for VM use. They use ImGui; using that for a local desktop app UI is also ridiculous, imho. Whatever.


I would imagine running a local LLM for development isn’t as popular as using a hosted provider. I don’t personally host a local model, but I have shared GPUs and storage volumes with VMs, and I didn’t find it that much of a hassle. What kinds of problems are you running into?

Doesn’t Ghostty also use graphics acceleration? I was under the impression that rendering text is a relatively challenging graphics compute task.


I run a local LLM on my MacBook alongside frontier models for different tasks. I am in the process of setting up a three-Mac-Studio system to serve AI to my team.

What's wrong with using a 3D accelerator and falling back to CPU graphics if needed? Pixels per joule is orders of magnitude better on an iGPU than on the CPU. (Which can matter over an 8-12 hour editing session, maybe.)

Modern IDEs don't use 3D at all, nor do they use the sprite-like 2D graphics that GPUs excel at and that can accelerate, e.g., mobile touch- and swipe-based UX. The main thing they do is font rendering, and accelerating that on the GPU while keeping visual quality unchanged is quite complicated. The graphics pipeline doesn't really help all that much.

Agents are read-only by default in Zed. You should really get off your high horse.

Exactly this. Assuming your access will last is very risky. And assuming that Chinese companies will forever keep eroding the economic viability of American models by open-sourcing reverse-engineered equivalents is naive.

I don’t know why, but this project has me irrationally excited!

Interesting project. I am building an IDE for my phone and browser (www.propelcode.app) and have evaluated a few container architectures and providers. It was quite painful to get a prototype working. I will try your platform and would be happy to give feedback.

Much appreciated! And good luck with your project.

What’s the best way to give you user feedback? What would be most helpful? What’s your ideal customer profile?

oz dot katz at treeverse.io would be best. ICP is SMB/mid-sized ISVs.

