Hacker News | Aurornis's comments

This article is hard to take seriously when it presents 25 Gbps internet like it's available everywhere in Switzerland. Even the page you click on has an "Up to" and requires you to enter your address to check availability.

That's on top of the usual problems with comparing small European countries to all of America. Switzerland's entire population is barely larger than the population of New York City. There are several metro areas in the US with more people than Switzerland.

Switzerland is also very, very small. Its land mass is equal to about 0.5% of the United States. We only have a handful of states smaller than Switzerland.

There are valid geopolitical discussions to be had, but it's hard to read these articles that single out tiny little European countries and compare them to the sprawling United States and ignore the elephant in the room.


Pulling or pushing Docker images, downloading LLM models, installing AAA games from Steam. There are so many use cases that you won't see if you're just doing email and web browsing with a little bit of video streaming.

It's also helpful for off-site backup. I believe off-site backup is very important, and having gigabit upload is very helpful for this.
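The use cases above come down to simple arithmetic. A rough sketch of the transfer times involved (the file sizes and overhead factor are illustrative assumptions, not measurements):

```python
# Back-of-envelope transfer times for large downloads/uploads.
# The 40 GB "LLM model" size and 0.9 efficiency factor are assumptions.

def transfer_seconds(size_gb: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Seconds to move size_gb gigabytes over a link_gbps link.

    efficiency roughly accounts for protocol overhead.
    """
    size_gigabits = size_gb * 8
    return size_gigabits / (link_gbps * efficiency)

# A ~40 GB model download at three link speeds:
for gbps in (0.1, 1.0, 25.0):
    secs = transfer_seconds(40, gbps)
    print(f"{gbps:>5} Gbps: {secs / 60:6.1f} minutes")
```

At 100 Mbps that hypothetical 40 GB download is about an hour; at gigabit it is a few minutes, which is where the difference starts to matter for backups and model pulls.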

> I don't understand the desire (fetish?)

If you don't need it then you should be happy with what you've got, but calling other people's uses a "fetish" is unnecessary. And weird.


I agree that calling it a fetish is weird, but I also have a hard time believing ordinary people are pulling and pushing Docker images all the time.

The reality is that these high-speed internet political initiatives fail because, for most people, internet access is a solved problem, and there isn't a critical mass of people to push through legislation.

Which is not to say it's a solved problem for everyone; it's just that the desires of a minority don't get prioritized in the democratic process.


Gigabit is incredible if you are a dev

It's Stockholm syndrome from what I can tell.

The earlier threads from the Collabora side were also disappointing in how childish all of their arguments were structured. I read their posts and could barely understand what was being claimed in between all of the sarcasm and attacks, and I wasn't alone in the comments here.

From the outside, this entire situation is obviously very heated. What seems to be missing is some adults in the room who can turn down the tempers, get everyone to take a beat, and then start coming to some reasonable compromises.

Instead it feels like we're seeing the inevitable boiling over of passionate people who couldn't collaborate well and never found ways to cool off.

It's a sad situation to watch.


> AFAIK some types of drug tests can't measure whether you're high on $drug now or if you've taken it before and you're sober now. If you're driving sober but you took $drug yesterday, you might be arrested for DWI.

This is true for THC (marijuana) tests that look for metabolites of THC with long half-lives. It takes a very long time for the body to metabolize THC through the different steps and then eliminate those metabolites.

For many of the drugs named in the article, elimination is rapid. For some, like fentanyl, concentrations are also low. If someone has an appreciable concentration of fentanyl detectable by a simple test, they are very inebriated.


All of the simple drug tests are intended for use as screening tools, with positive results sent to labs for verification.

> Even colorimetric test makers say their products only screen for the possibility of illegal drugs – and should not be considered tools for verification.

> “NOTE: ALL TEST RESULTS MUST BE CONFIRMED BY AN APPROVED ANALYTICAL LABORATORY!” reads one warning for a pack of colorimetric tests.

They should have known this and followed proper procedure.

Also keep this in mind for your employment drug screening. This will typically use more advanced tests for the first pass, but if someone comes up positive then the sample should automatically be sent to the more advanced and accurate screening step. The good testing companies do this automatically, but I've heard stories where some cheap testing companies did not, and the falsely accused employees didn't know they could request the more accurate screening step.


Refreshing to see an honest and balanced take on AI coding. This is what real AI-assisted coding looks like once you get past the initial wow factor of having the AI write code that executes and does what you asked.

This experience is familiar to every serious software engineer who has used AI code gen and then reviewed the output:

> But when I reviewed the codebase in detail in late January, the downside was obvious: the codebase was complete spaghetti. I didn't understand large parts of the Python source extraction pipeline, functions were scattered in random files without a clear shape, and a few files had grown to several thousand lines. It was extremely fragile; it solved the immediate problem but it was never going to cope with my larger vision.

Some people never get to the part where they review the code. They go straight to their LinkedIn or blog and start writing (or having ChatGPT write) posts about how manual coding is dead and they’re done writing code by hand forever.

Some people review the code and declare it unusable garbage, then also go to their social media and post how AI coding is completely useless and they’re not going to use it for anything.

This blog post shows the journey that anyone outside those two vocal minorities is going through right now: a realization that AI coding tools can be a large accelerator, but that you need to learn how to fit them into your workflow and remain involved in the code. It's not as clickbaity as the extreme takes that get posted all the time, and it's a little disappointing to read that hard work is still required. But it is a realistic and balanced take on the state of AI coding.


+1

I’ve been driving Claude as my primary coding interface the last three months at my job. Other than a different domain, I feel like I could have written this exact article.

The project I’m on started as a vibe-coded prototype that quickly got promoted to a production service we sell.

I’ve had to build the mental model after the fact, while refactoring and ripping out large chunks of nonsense or dead code.

But the product wouldn’t exist without that quick and dirty prototype, and I can use Claude as a goddamned chainsaw to clean up.

On Friday, I finally added a type checker pre-commit hook and fixed the 90 existing errors (properly, no type ignores) in ~2 hours. I tried full-agentic first, and it failed miserably; then I went through error by error with Claude, we tightened up some existing types, fixed some clunky abstractions, and got a nice, clean result.
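The commenter doesn't show their code, but a hypothetical sketch of what "properly, no type ignores" tends to mean in a Python codebase is handling the `None` case instead of silencing the checker:

```python
# Hypothetical example (not the commenter's actual code) of fixing a
# type error properly rather than suppressing it.
from typing import Optional

def find_user_name(users: dict[int, str], uid: int) -> Optional[str]:
    # A loose version might write:
    #     return users.get(uid).upper()  # type: ignore[union-attr]
    # which silences the checker and crashes at runtime on missing ids.
    # Here the absent case is explicit, so a checker such as mypy can
    # verify that every caller handles None.
    user = users.get(uid)
    if user is None:
        return None
    return user.upper()

print(find_user_name({1: "ada"}, 1))  # ADA
print(find_user_name({1: "ada"}, 2))  # None
```

The point of the exercise is that each fix makes the type narrower and the callers more honest, rather than sprinkling ignore comments that an agent will happily multiply.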

AI-assisted coding is amazing, but IMO for production code there’s no substitute for human review and guidance.


My process: start ideating and get the AI to poke holes in your reasoning, your vision, scalability, etc. Do this for a few days while taking breaks. This is all contained in one Markdown file with mermaid diagrams and sections.

Then use that ideation to architect. Dive into details and tell the AI exactly what your choices are: how certain methods should be called, how logging and observability should be set up, what language to use, type checking, coding style (configure ruthless linting and formatting before you write a single line of code), what testing methodology (framework, unit, integration, e2e), database changes, how you will handle migrations. Pin down as much as possible so the AI is confined to how you would do it.

Then, create a plan file, have the AI manage it like a task list, and implement in parts. Before starting, it needs to present you a plan; in it you will notice it makes mistakes, misunderstands things you maybe didn't clarify before, or simply forgets. Add to AGENTS.md or whatever your tool uses, make changes to the AI's plan, tell it to update the plan.md, and when satisfied, proceed.

After done, review the code. You will notice there is always something to fix. Hardcoded variables, a sql migration with seed data that should actually not be a migration, just generally crazy stuff.

The worst part is that the AI is always very loose on requirements. You will notice all its fields are nullable, records have little to no validation, and when you report an error during testing it tries to solve it with a brittle async solution, like LISTEN/NOTIFY or a callback, instead of the architecturally correct one. These are the things that at scale are hell to debug, especially if you did not write the code.
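The "everything nullable, no validation" complaint is concrete enough to sketch. A hedged, hypothetical example of the kind of tightening being described, using only the standard library:

```python
# Hypothetical sketch: a record with required, validated fields,
# versus the all-optional, unvalidated shape an agent tends to emit.
# The Order/quantity names are illustrative, not from the thread.
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    order_id: str
    quantity: int

    def __post_init__(self) -> None:
        # Reject bad records at construction time instead of letting
        # them propagate into the database or async pipeline.
        if not self.order_id:
            raise ValueError("order_id must be non-empty")
        if self.quantity <= 0:
            raise ValueError("quantity must be positive")

Order(order_id="A-1", quantity=2)       # fine
try:
    Order(order_id="", quantity=0)      # rejected immediately
except ValueError as exc:
    print(f"rejected: {exc}")
```

Failing at construction is exactly the property that makes scale debugging tractable: the bad data never gets far enough to need LISTEN/NOTIFY-style workarounds.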

If you do this and iterate you will gradually end up with a solid harness and you will need to review less.

Then port it to other projects.


LISTEN/NOTIFY is not brittle, we use it for millions of events per day.

I agree! It should be very stable, IMO. If not, then please send a bug report and we'll look into it. Also, now it scales well with the number of listening connections (given clients listen on unique channel names): https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit...

I find it very interesting that you assume this method would branch out to other projects. I find it even more interesting that you assume all software codebases use a database, give a damn about async anything, and that these ideas percolate out to general software engineering.

Sounds like a solid way to make CRUD web apps, though.


GP is clearly providing examples of categories of tasks. Sure, not all languages do “async fn foo()”, but almost all problem domains involve some sort of making sure the right things happen at the right times, which is in a similar ballpark.

Holier than thou “yeah well I work on stuff that doesn’t use databases, checkmate!” doesn’t really land - data still gets moved around somehow, and often over a network!


I’ve found that LLMs will frequently do extremely silly things that no person would do to make typescript code pass the typechecker.

You need to be very specific, and also question the output if it does something insane.

This decade’s version of “works on my box”

Yeah, I've found LLMs cannot write good Typescript code period. The good news is that they are excellent at some other languages.

I caught it using Parameters<typeof otherfn>[2] the other day. It wanted to avoid importing a type, so it did this nonsense. (I might have the syntax slightly wrong here, I'm writing from memory.)

But it's not all bad news. TIL about Parameters<T>.


Fwiw, the article mirrors my experience when I started out too, even exactly with the same first month of vibecoding, then the next project which I did exactly like he outlined too.

Personally, I think it's just the natural flow when you're starting out. If he keeps going, his opinion is going to change and as he gets to know it better, he'll likely go more and more towards vibecoding again.

It's hard to say why, but you get better at it, even if it's really hard to put into words why.


> he'll likely go more and more towards vibecoding again

I think "more and more" is doing some very heavy lifting here. On the surface it reads like "a lot" to many people, I think, which is why this is hard to read without cringing a bit. Read like that it comes off as "It's very addictive and eventually you get lulled into accepting nonsense again, except I haven't realized that's what's happening".

But the truth is that this comment really relies entirely on what "more and more" means here.


Given how addictive vibecoding is, I think it's very hard to be objective about the results if you are involved in the process.

It's a little like asking a cokehead how the addiction is going for him while he is high. Obviously he's going to say it's great because the consequences haven't hit him. Some percentage of addicts will never realize it was a problem at all.

It's not random that AI happens to be built by the very same people that turned internet forums into the most addictive communication technology ever.


You can’t put it into words? Why? Perhaps you haven’t looked at it objectively?

It may actually be true. Your feeling might be right - but I strongly caution you against trusting that feeling until you can explain it. Something you can’t explain is something you don’t understand.


really?

have you ever learned a skill? Like carving, singing, playing guitar, playing a video game, anything?

It's easy to get better at it without understanding why you're better at it. As a matter of fact, very very few people master the discipline enough to be able to grasp the reason for why they're actually better

Most people just come up with random shit which may or may not be related. Which I just abstained from.


You can get better at something without understanding why, but you should be able to think about it and determine why fairly easily.

This is something everyone who cares about improving in a skill does regularly - examine their improvement, the reasons behind it, and how to add to them. That’s the basis of self-driven learning.


This is an absurd statement. There are many complex undertakings in sport where even the very best get better with practice and can't tell you why. In fact, the ones who think they can tell you why are the ones to be most skeptical of.

You are just making stuff up or regurgitating material from a pop science book.


They can't tell you (not everyone is eloquent), but they sure know why. Struggling to put something into words is not the same as not knowing.

Not really. I can obviously say something, like you learn which features the models are able to actually implement, and you learn how to phrase and approach trickier features to get the model to do what you want.

And that's not really explainable without exploring specific examples. And now we're in thousands of words of explanation territory, hence my decision to say it's hard to put it into words.


I think you’re handwaving away vague, ungrounded intuition and calling it learning.

For instance, if I say “I noticed I run better in my blue shoes than my red shoes” I did not learn anything. If I examine my shoes and notice that my blue shoes have a cushioned sole, while my red shoes are flat, I can combine that with thinking about how I run and learn that cushioned soles cause less fatigue to the muscles in my feet and ankles.

The reason the difference matters is that if I don't do the learning step, when I buy another pair of blue shoes but they're flat soled, I'm back to square one.

Back to the real scenario, if you hold on to your ungrounded intuition re what tricks and phrasing work without understanding why, you may find those don’t work at all on a new model version or when forced to change to a different product due to price, insolvency, etc.


You're always free to stop at the level of abstraction at which you find a certain answer to be satisfying, but you can also keep digging. Why are flat shoes better? Well, it's to do with my gait. Ok, but why is my gait like that? Something-something musculoskeletal. Why is my body that way? Something-something genetic. OK, but why is that? And so on.

Pursued far enough, any line of thought will reach something non-deterministic - or, simply, That's The Way It Is - however unsatisfying that is to those of us who crave straightforward answers. Like it or not, our ground truth as human beings ultimately rests on intuition. (Feel free to say, "No, it's physics", or "No, it's maths", but I'll ask you if you're doing those calculations in your head as you run!)


I've learned a number of skills, and for me none of them worked in the way you're describing. I didn't learn to cut good miter joints by randomly vibe-sawing wood until I unlocked miter joints in the skill tree. I carefully studied the errors I made, and adjusted in ways I thought might correct them, some of which helped some of which did not. Then eventually I understood the relationship between my actions and the underlying principles in enough detail to consistently hit 45 degrees.

Isn't that example pretty reductive, in that you have a directly-measurable output? I mean, the joint is either 45° (well, 90°) or it's not. Zoom out a bit, and the skill-set becomes much less definable: are my cabinets good - for some intersection of well-proportioned, elegantly-finished, and fit for purpose, with well-chosen wood and appropriate hardware.

Mind you, I don't think the process of improvement in those dimensions is fundamentally different, just much less direct and not easily (or perhaps even at all) articulable.


Agree. This is such a good balanced article. The only things that still make the insights difficult to apply to professional software development are: this was greenfield work and it was a solo project. But that’s hardly the author’s fault. It would however be fantastic to see more articles like this about how to go all in on AI tools for brownfield projects involving more than one person.

One thing I will add: I actually don’t think it’s wrong to start out building a vibe coded spaghetti mess for a project like this… provided you see it as a prototype you’re going to learn from and then throw away. A throwaway prototype is immensely useful because it helps you figure out what you want to build in the first place, before you step down a level and focus on closely guiding the agent to actually build it.

The author’s mistake was that he thought the horrible prototype would evolve into the real thing. Of course it could not. But I suspect that the author’s final results when he did start afresh and build with closer attention to architecture were much better because he has learned more about the requirements for what he wanted to build from that first attempt.


This wasn't even just greenfield work, it included the exact type of work where AI arguably excels: extracting working code from an extant codebase (SQLite) as a reusable library. (It also included the type of work AI is really bad at: designing APIs sensibly.)

I'll take the other side of this.

Professional software engineers like many of us have a big blind spot when it comes to AI coding, and that's a fixation on code quality.

It makes sense to focus on code quality. We're not wrong. After all, we've spent our entire careers in the code. Bad code quality slows us down and makes things slow/insecure/unreliable/etc for end users.

However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

There are two forces contributing to this: (1) more people coding smaller apps, and (2) improvements in coding models and agentic tools.

We are increasingly moving toward a world where people who aren't sophisticated programmers are "building" their own apps with a user base of just one person. In many cases, these apps are simple and effective and come without the bloat that larger software suites have subjected users to for years. The code is simple, and even when it's not, nobody will ever have to maintain it, so it doesn't matter. Some apps will be unreliable, some will get hacked, some will be slow and inefficient, and it won't matter. This trend will continue to grow.

At the same time, technology is improving, and the AI is increasingly good at designing and architecting software. We are in the very earliest months of AI actually being somewhat competent at this. It's unlikely that it will plateau and stop improving. And even when it finally does, if such a point comes, there will still be many years of improvements in tooling, as humanity's ability to make effective use of a technology always lags far behind the invention of the technology itself.

So I'm right there with you in being annoyed by all the hype and exaggerated claims. But the "truth" about AI-assisted coding is changing every year, every quarter, every month. It's only trending in one direction. And it isn't going to stop.


> However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

Strongly disagree with this thesis, and in fact I'd go completely the opposite: code quality is more important than ever thanks to AI.

LLM-assisted coding is most successful in codebases with attributes strongly associated with high code quality: predictable patterns, well-named variables, use of a type system, no global mutable state, very low mutability in general, etc.

I'm using AI on a pretty shitty legacy area of a Python codebase right now (like, literally right now, Claude is running while I type this) and it's struggling for the same reason a human would struggle. What are the columns in this DataFrame? Who knows, because the dataframe is getting mutated depending on the function calls! Oh yeah and someone thought they could be "clever" and assemble function names via strings and dynamically call them to save a few lines of code, awesome! An LLM is going to struggle deciphering this disasterpiece, same as anyone.
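The DataFrame complaint above generalizes beyond pandas. An illustrative sketch with plain dicts standing in for the DataFrame (the function and field names are hypothetical) shows why unstructured mutation defeats both humans and LLMs:

```python
# With in-place mutation, the shape of `row` depends on which
# functions happened to run, so neither a human nor an LLM reading a
# later function can tell what keys exist at that point.
from typing import TypedDict

def enrich(row: dict) -> None:
    row["region"] = "EU"  # silently mutates the caller's data

# Structured alternative: the return type states its fields outright,
# so any reader (or type checker) knows the shape without tracing
# the call history.
class Enriched(TypedDict):
    customer_id: int
    region: str

def enrich_typed(customer_id: int, region: str) -> Enriched:
    return {"customer_id": customer_id, "region": region}

row = {"customer_id": 7}
enrich(row)                              # keys now depend on call order
print(sorted(row))                       # ['customer_id', 'region']
print(enrich_typed(7, "EU")["region"])   # EU
```

This is the sense in which "high code quality" and "LLM-legible" converge: the typed version carries its own documentation, while the mutating version requires whole-program context to understand.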

Meanwhile for newer areas of the code with strict typing and a sensible architecture, Claude will usually just one-shot whatever I ask.

edit: I see most replies are saying basically the same thing here, which is an indicator.


I agree entirely with your statement that structure makes things easier for both LLMs and humans, but I'd gently push back on the mutation. Exactly as mutation is fine for humans it also seems to be fine for LLMs in that structured mutation (we know what we can change, where we can change it and to what) works just fine.

Your example with the dataframes is completely unstructured mutation typical of a dynamic language and its sensibilities.

I know from experience that none of the modern models (even cheap ones) have issues dealing with global or near-global state and mutating it, even navigating mutexes/mutices, conds, and so on.


> LLM-assisted coding is most successful in codebases with attributes strongly associated with high code quality: predictable patterns, well-named variables, use of a type system, no global mutable state, very low mutability in general, etc.

That's all very true, but what you're missing is that the proportion of codebases that need this is shrinking relative to the total number of codebases. There's an incredible proliferation of very small, bespoke, simple, AI-coded apps, that are nonetheless quite useful. Most are being created by people who have never written a line of code in their life, who will do no maintenance, and who will not give two craps how the code looks, any more than the average YouTuber cares about the aperture of their lens or the average forum commenter care about the style of their prose.

We don't see these apps because we're professional software engineers working on the other stuff. But we're rapidly approaching a world where more and more software is created by non-professionals.


> That's all very true, but what you're missing is that the proportion of codebases that need this is shrinking relative to the total number of codebases. There's an incredible proliferation of very small, bespoke, simple, AI-coded apps, that are nonetheless quite useful. Most are being created by people who have never written a line of code in their life, who will do no maintenance, and who will not give two craps how the code looks, any more than the average YouTuber cares about the aperture of their lens or the average forum commenter care about the style of their prose.

I agree that there will be more small, single-use utilities, but you seem to believe that this will decrease the number or importance of traditional long-lived codebases, which doesn't make sense. The fact that Jane Q. Notadeveloper can vibe code an app for tracking household chores is great, but it does not change the fact that she needs to use her operating system (a massive codebase) to open Google Chrome (a massive codebase) and go to her bank's website (a massive codebase) to transfer money to her landlord for rent (a process which involves many massive software systems interacting with each other, hopefully none of which are vibe coded).

The average YouTuber not caring about the aperture of their lens is an apt comparison: the median YouTube video has 35 views[0]. These people likely do not care about their camera or audio setup, it's true. The question is, how is that relevant to the actual professional YouTubers, MrBeast et al, who actually do care about their AV setup?

[0] https://www.intotheminds.com/blog/en/research-youtube-stats/


This is where I get into much more speculative land, but I think people are underestimating the degree to which AI assistant apps are going to eat much of the traditional software industry. The same way smart phones ate so many individual tools, calculators, stop watches, iPods, etc.

It takes a long time for humanity to adjust to a new technology. First, the technology needs to improve for years. Then it needs to be adopted and reach near ubiquity. And then the slower-moving parts of society need to converge and rearrange around it. For example, the web was quite ready for apps like Airbnb in the mid 90s, but the adoption+culture+infra was not.

In 5, maybe 10, certainly 15 years, I don't think as many people are going to want to learn, browse, and click through a gazillion complex websites and apps and flows when they can easily just tell their assistant to do most of it. Google already correctly realizes this as an existential threat, as do many SaaS companies.

AI assistants are already good enough to create ephemeral applications on the fly in response to certain questions. And we're in the very, very early days of people building businesses and infra meant to be consumed by LLMs.


> In 5, maybe 10, certainly 15 years, I don't think as many people are going to want to learn, browse, and click through a gazillion complex websites and apps and flows when they can easily just tell their assistant to do most of it.

And how do you think their assistant will interact with external systems? If I tell my AI assistant "pay my rent" or "book my flight" do you think it's going to ephemerally vibe code something on the banks' and airlines' servers to make this happen?

You're only thinking of the tip of the iceberg which is the last mile of client-facing software. 90%+ of software development is the rest of the iceberg, unseen beneath the surface.

I agree there will be more of this but again, that does not preclude the existence of more of the big backend systems existing.


I don't think we disagree. We still have big mainframe systems from the 70s and beyond that are powering parts of society. I don't think all current software systems are just going to die or disappear, especially not the big ones. But I do think significant double-digit percentages of software engineers are working on other types of software that are at risk of becoming first-, second-, or third-order casualties in a world where ephemeral AI assistant-generated software and vibe-coded bespoke software becomes increasingly popular.

The thing is, everything you describe may be easy for an average person in the future. But just having your single AI agent do all of it will be even easier, and that seems like where things will go.

Just like everyone has a 3D printer at home?

People want convenience, not a way to generate an application that creates convenience.


> However, code quality is becoming less and less relevant in the age of AI coding

It actually becomes more and more relevant. AI constantly needs to reread its own code and fit it into its limited context, in order to take it as a reference for writing out new stuff. This means that every single code smell, and every instance of needless code bloat, actually becomes a grievous hazard to further progress. Arguably, you should in fact be quite obsessed about refactoring and cleaning up what the AI has come up with, even more so than if you were coding purely for humans.


Even non-frontier models now offer a context window of 1 million tokens. That's 100K-300K LOCs. I would not call that a limited context.
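The 100K-300K LOC figure is easy to sanity-check, assuming the common rough rule of thumb of a few tokens per line of code (an assumption, not a measurement):

```python
# Sanity check of the LOC-per-context-window estimate above.
# The tokens-per-line figures are rough rules of thumb.
CONTEXT_TOKENS = 1_000_000

for tokens_per_loc in (10, 3):
    loc = CONTEXT_TOKENS // tokens_per_loc
    print(f"~{tokens_per_loc} tokens/LOC -> ~{loc:,} LOC fits in context")
```

At ~10 tokens per line that's about 100K lines; at ~3 tokens per line, about 333K, which brackets the claim.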

> However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

Strong disagree. I just watched a team spend weeks trying to make a piece of code work with AI because the vibe-coded codebase was spaghetti garbage; even the AI couldn't tell what needed to be done and was basically playing ineffective whack-a-mole, fixing the bug you asked about by reintroducing an old bug or introducing a new one, because no one understood what was happening. And humans couldn't even step in like normal, because no one understood what was going on.


Okay, so you observed one team that had an issue with AI code quality. What's your point?

In 1998, I'm sure there were newspaper companies who failed at transitioning online, didn't get any web traffic, had unreliable servers crash, etc. This says very little about what life would be like for the newspaper industry in 1999, 2000, 2005, 2010, and beyond.


I'm arguing that code quality very much still matters and will only continue to matter.

AI will get better at making good maintainable and explainable code because that’s what it takes to actually solve problems tractably. But saying “code quality doesn’t matter because AI” is definitely not true both experientially and as a prediction. Will AI do a better job in the future? Sure. But because their code quality improves not because it’s less important.


Well then sure, we can agree there, it's just a matter of phrasing then.

Then you may want to clarify what your phrasing meant because I couldn’t find a more charitable interpretation

More and more software will be built by non-experts, software that has smaller user bases and simpler use cases and doesn't need to be maintained as much if at all. "Poor AI code quality" matters much less for these than for say, software written by developers at FAANG companies, since literally nobody will ever even look at the code.

Where we're headed is toward a world where a ton of software is ephemeral, apps literally created by AI out of thin air for a single use, and then gone.


Ephemeral in the same way the electrical wiring in an old house is ephemeral.

Which is to say, not at all.

Original wiring done by a professional, later changes by “vibe electrician” homeowners.

Every circuit might be a custom job, but they all accumulate into something a SWE calls “technical debt”.

Don’t like how the toaster and the microwave are on the same circuit even though they are in different parts of the kitchen? You’re lucky if you can even follow the wiring back to the circuit box to see how it was done. The electrical box is so much of a mess where would you even run a new circuit?

That’s the future we’re looking at.


No ephemeral as in: I'll ask the AI to check my email, and it'll create a bespoke table UI on the fly right inside my AI assistant, and populate it with relevant email data. And I'll use it, and then it will disappear. Software created and destroyed in a moment.

Not all software is meant to be some permanent building block upon which other software sits.

When new technology arrives that makes earlier ways of doing things obsolete, the consistent pattern throughout history has been that existing experts and professionals significantly underestimate the changes to come, in large part because (a) they don't like those changes, and (b) they're too used to various constraints and priorities that used to be important but no longer are. In other words, they're judging the new tech the lens of an older world, rather than through the lens of a newer world created by the new tech.


There's almost no point in arguing about this anymore. Neither you nor the other person are going to be convinced. We just have to wait and see if a new crop of 100x productivity AI believer companies come along and unseat all the incumbents.

It seems that your opinion is based on expectations for the future then, which is notoriously difficult to predict.

It's not that hard to predict that obviously useful new technology is going to improve over time.

Guns, wheels, cars, ships, batteries, televisions, the internet, smartphones, airplanes, refrigeration, electric lighting, semiconductors, GPS, solar panels, antibiotics, printing presses, steam engines, radio, etc. The pattern is obvious, the forces are clear and well-studied.

If there is (1) a big gap between current capabilities and theoretical limits, (2) huge incentives to improve things, (3) no alternative tech that will replace or outcompete it, (4) broad social acceptance and adoption, and (5) no chance of the tech being lost or forgotten, then technological improvement is basically a guarantee.

These are all obviously true of AI coding.


That list cherry picks all the successful cases where the technology improved while ignoring the many, many others where it didn't and the technology improved no further. That's dishonest.

It isn't even a good job of cherry-picking: we never got mainstream supersonic passenger aircraft after the Concorde because aerospace technology hasn't advanced far enough to make it economically viable, and the slowing progress and massively increasing costs of cutting-edge semiconductor processes are very well known.


You're not factoring in the list of constraints I provided.

There's no broad social acceptance of supersonic flight because it creates incredibly loud sonic booms that the public doesn't want to deal with. And despite that, it's still a bad counterexample, as companies continue to innovate in this area e.g. Boom Supersonic.

At best you can say, "It's taking longer than expected," but my point was never that it will happen on any specific schedule. It took 400 years for guns to advance from the primitive fire lances in China to weapons with lock mechanisms in the 1400s. Those long time frames only prove my point even more strongly. Progress WILL happen, when there is appetite and acceptance and incentive and room to grow, and time is no obstacle. It's one of the more certain things in human history, and the forces behind it have been well studied.

Just as certain: the people and jobs who are obsoleted by these new technologies often remain in denial until they are forgotten.


If code quality only stops mattering in 400 years (whatever that definition happens to be), then the prediction is worthless in terms of what you should do today. You use it to argue that code quality is unimportant to deal with now, but if it’s a 400-year payoff, you’ve made the wrong bet.

Surely you don't think AI coding technology will be as slow to develop as guns were.

We're obviously talking about 1-10 years here, not 100-1000 years.


It’s really hard to predict where exponential progress will freeze. I was reading the other day that the field seems to have stagnated again, with no really meaningful ideas to overcome the inherent bottlenecks of diminishing returns from scaling. I’m not a pessimist or an unbridled optimist, but I think it’s fundamentally difficult to predict, and the law of averages suggests someone will end up crowing about being right.

But hindsight is 20/20, as they say. In 2020 people predicted that Facebook Horizon would only go in one direction: always improve and become as pervasive as the internet. So when you predict that the design and architecture capabilities of models will continue to improve, thus making code quality irrelevant, you sound very confident. And if in five years you are right, you will brag about it here. If not, well, I for one will not track you down and rub it in your face. Peace out.

You're confusing betting on a company/product vs betting on technological improvement in general.

It is absolutely the case that virtual reality technology will only get better over time. Maybe it'll take 5, or 10, or 20, or 40 years, but it's almost a certainty that we'll eventually see better AR/VR tech in the future than we have in the past.

Would you bet against that? You'd be crazy to imo.


There's a kid outside the window of the place I'm staying who's been in the yard playing and talking with people online through his VR headset for like 2+ hours. He's living in the future. Whatever happens, he and his friends are going to continue to be interested in more of this.

Whether what they're using in 20 years is produced by the company formerly known as Facebook or not is a whole different question.


The newspaper industry is the perfect analogy, because it is effectively dead. Wholesale dead. Here and there, the biggest, most world-renowned papers are still alive, on life-support... NYT, WSJ, etc. But they're all dead. Their death has caused the absolute destruction of an entire industry sector and has given gangrene to adjacent industries that they will soon succumb to. The point about 1998 wasn't that there was this transition that demanded careful attention and wise strategy, but that death was coming for it no matter what anyone did to stop it.

The death of newspapers is quite the spectacle too. No one seems to understand how bad it is... the youngest generation can't even seem to recognize that anything is missing. We've effectively amateurized journalism so that only grifters and talentless hacks want to attempt it, and only in tiny little soundbites on Twitter or other social media (and they're quickly finding out how it might be more lucrative to do propaganda for foreign governments or MLM charlatanism). When the death of the software industry is complete, it too will have been completely amateurized, the youngest generation will not even appreciate that people used to make it for a living, and the few amateurs doing it will start to comprehend how much more lucrative it will be to just make poorly disguised malware.


I don't buy this at all. Code quality will always matter. Context is king with LLMs, and when you fill that context up with thousands of lines of spaghetti, the LLM will (and does) perform worse. Garbage in, garbage out, that's still the truth from my experience.

Spaghetti code is still spaghetti code. Something that should be a small change ends up touching multiple parts of the codebase. Not only does this increase costs, it just compounds the next time you need to change this feature.

I don't see why this would be a reality that anyone wants. Why would you want an agent going in circles, burning money and eventually finding the answer, if simpler code could get it there faster and cheaper?

Maybe one day it'll change. Maybe there will be a new AI technology which shakes up the whole way we do it. But if the architecture of LLMs stays as it is, I don't see why you wouldn't want to make efficient use of the context window.


I didn't say that you "want" spaghetti code or that spaghetti code is good.

I said that (a) apps are getting simpler and smaller in scope and so their code quality matters less, and (b) AI is getting better at writing good code.


Apps are getting bigger and more ambitious in scope as developers try to take advantage of any productivity boost LLMs provide them.

Every metric I've seen points to there being an explosion in (a) the number of apps that exist and (b) the number of people making applications.

What relevance do either of those claims have to the claim of the comment you are responding to?

Are you trying to imply that having more things means that each of them will be smaller? There are more people than there were 500 years ago - are they smaller, or larger?

Also, the printing press did lead to much longer works. There are many continuous book series that have run for decades, with dozens of volumes and millions of words. This is a direct result of the printing press. Just as there are television shows that have run with continuous plots for thousands of hours. This is a consequence of video recording and production technologies; you couldn't do that with stage plays.

You seem to be trying to slip "smaller in scope" into your statement without backing, even though I'd insist that applications individuals wrote being "smaller in scope" was an obvious consequence of the tooling available. I can't know everything, so I have to keep the languages and techniques limited to the ones that I do know, and I can't write fast enough to make things huge. The problems I choose to tackle are based on those restrictions.

Those are the exact things that LLMs are meant to change.


The average piece written and published today is much shorter than the average piece from the past. Look at Twitter. Social media in general. Internet forums. Blog posts. Emails. Chats. Etc. The amount of this content DWARFS other content.

The same is true of most things that get democratized. Look at video. TikTok, YouTube, YouTube shorts.

Look at all the apps people are building for themselves with AI. They are typically not building Microsoft Word.

Of course there will be some apps that are bigger and more ambitious than ever. I myself am currently building an app that's bigger and more ambitious than I would have tried to build without AI. I'm well aware of this use case.

But as many have pointed out, AI is worse at these than at smaller apps. And pretending that these are the only apps that matter is what's leading developers, imo, to over-value the importance of code quality. What's happening right now, invisible to most professional engineers, is an explosion of tiny, bespoke personal applications quickly built by non-developers that are going to chip away at people's reasons to buy and use large, bloated, professional software with hundreds of thousands of users.


> Look at all the apps people are building for themselves with AI.

The apps those people were making before LLMs became ubiquitous were no apps. So by definition they are now larger and more ambitious.


There's already been an explosion of apps - and most of them suck, are spam, or worse, will steal your data.

We don't need more slop apps, we already have that and have for years.


The Jevons paradox says otherwise. As producing apps becomes cheaper, we will not be able to help ourselves: we will make them larger until they fill all available space and cost just as much to produce and maintain.

That's the incorrect application of the Jevons Paradox. We won't get bigger apps, we'll get more apps.

Think about what happened to writing when we went from scribes to the printing press, and from the printing press to the web. Books and essays didn't get bigger. We just got more people writing.


I’ve been told repeatedly now that if AI coding isn’t working for me it’s because my projects code quality is too poor so the agents can’t understand it.

Now I’m being told code quality doesn’t matter at all.


Nothing you wrote seems to support what you said at the start there. Why is the importance of code quality decreasing?

  > However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

  > [...]

  > We are increasingly moving toward a world where people who aren't sophisticated programmers are "building" their own apps with a user base of just one person. In many cases, these apps are simple and effective and come without the bloat that larger software suites have subjected users to for years. The code is simple, and even when it's not, nobody will ever have to maintain it, so it doesn't matter. Some apps will be unreliable, some will get hacked, some will be slow and inefficient, and it won't matter. This trend will continue to grow.

I do agree that more and more people are going to take advantage of agentic coding to write their own tools/apps to make their lives easier. And I genuinely see it as a good thing: computers were always supposed to make our lives easier.

But I don't see how it can be used as an argument for "code quality is becoming less and less relevant".

If AI is producing 10 times more lines than are necessary to achieve the goal, that's more resources used. With the prices of RAM and SSD skyrocketing, I don't see it as a positive for regular users. If they need to buy a new computer to run their vibecoded app, are they really reaping the benefits?

But what's more concerning to me is: where do we draw the line?

Let's say it's fine to have a garbage vibecoded app running only on its "creator's" computer. Even if it gobbles gigabytes of RAM and is absolutely not secure. Good.

But then, if "code quality is becoming less and less relevant", does this also applies to public/professional apps?

In our modern societies we HAVE to use dozens of software everyday, whether we want it or not, whether we actually directly interact with them or not.

Are you okay with your power company cutting power because their vibecoded monitoring software mistakenly thought you hadn't paid your bills?

Are you okay with an autonomous car driving over your kid because its vibecoded software didn't see them?

Are you okay with cops coming to your door at 5AM because a vibecoded tool reported you as a terrorist?

Personally, I'm not.

People can produce all the trash they want on their own hardware. But I don't want my life to be ruled by software that was never given the quality controls it should have had.


> If AI is producing 10 times more lines than are necessary to achieve the goal, that's more resources used. With the prices of RAM and SSD skyrocketing, I don't see it as a positive for regular users. If they need to buy a new computer to run their vibecoded app, are they really reaping the benefits?

I mean, I agree, but you could say this at any point in time throughout history. An engineer from the 1960s could scoff at the web, the explosion in the number of programs, and the decline in efficiency of the average program.

An artist from the 1700s would scoff at the lack of training and precision of the average artist/designer from today, because the explosion in numbers has certainly translated to a decline in the average quality of art.

A film producer from the 1940s would scoff at the lack of quality of the average YouTuber's videography skills. But we still have millions of YouTubers and they're racking up trillions of views.

Etc.

To me, the chief lesson is that when we democratize technology and put it in the hands of more people, the tradeoff in quality is something that society is ready to accept. Whether this is depressing (bc less quality) or empowering (bc more people) is a matter of perspective.

We're entering a world where FAR more people will be able to casually create and edit the software they want to see. It's going to be a messier world for sure. And that bothers us as engineers. But just because something bothers us doesn't mean it bothers the rest of the world.

> But then, if "code quality is becoming less and less relevant", does this also applies to public/professional apps?

No, I think these will always have a higher bar for reliability and security. But even in our pre-vibe-coded era, how many massive brand-name companies have had outages and hacks and shitty UIs? Our tolerance for these things is quite high.

Of course the bigger more visible and important applications will be the slowest to adopt risky tech and will have more guardrails up. That's a good thing.

But it's still just a matter of time, especially as the tools improve and get better at writing code that's less wasteful, more secure, etc. And as our skills improve, and we get better at using AI.


Maybe, but how exactly are you defining "code quality" ?

If strongly typed languages are preferred for AI coding, maybe the fixation on code quality makes LLMs produce better code.

> nobody will ever have to maintain it, so it doesn't matter

I'm curious about software that's actively used but nobody maintains it. If it's a personal anecdote, that's fine as well


I mean I've written some scripts and cron jobs for websites that I manage that have continued trucking for years with no changes or monitoring on my end. I suppose it's a bit easier on the web.

> However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

It's the opposite, code quality is becoming more and more relevant. Before now you could only neglect quality for so long before the time to implement any change became so long as to completely stall out a project.

That's still true, the only thing AI has changed is it's let you charge further and further into technical debt before you see the problems. But now instead of the problems being a gradual ramp up it's a cliff, the moment you hit the point where the current crop of models can't operate on it effectively any more you're completely lost.

> We are in the very earliest months of AI actually being somewhat competent at this. It's unlikely that it will plateau and stop improving.

We hit the plateau on model improvement a few years back. We've only continued to see any improvement at all because of the exponential increase of money poured into it.

> It's only trending in one direction. And it isn't going to stop.

Sure it can. When the bubble pops there will be a question: is using an agent cost effective? Even if you think it is at $200/month/user, we'll see how that holds up once the cost skyrockets after OpenAI and Anthropic run out of money to burn and their investors want some returns.

Think about it this way: If your job survived the popularity of offshoring to engineers paid 10% of your salary, why would AI tooling kill it?


> That's still true, the only thing AI has changed is it's let you charge further and further into technical debt before you see the problems. But now instead of the problems being a gradual ramp up it's a cliff, the moment you hit the point where the current crop of models can't operate on it effectively any more you're completely lost.

What you're missing is that fewer and fewer projects are going to need a ton of technical depth.

I have friends who'd never written a line of code in their lives who now use multiple simple vibe-coded apps at work daily.

> We hit the plateau on model improvement a few years back. We've only continued to see any improvement at all because of the exponential increase of money poured into it.

The genie is out of the bottle. Humanity is not going to stop pouring more and more money into AI.

> Sure it can. When the bubble pops there will be a question: is using an agent cost effective? Even if you think it is at $200/month/user, we'll see how that holds up once the cost skyrockets after OpenAI and Anthropic run out of money to burn and their investors want some returns.

The AI bubble isn't going to pop. This is like saying the internet bubble is going to pop in 1999. Maybe you will be right about short term economic trends, but the underlying technology is here to stay and will only trend in one direction: better, cheaper, faster, more available, more widely adopted, etc.


> What you're missing is that fewer and fewer projects are going to need a ton of technical depth.

> I have friends who'd never written a line of code in their lives who now use multiple simple vibe-coded apps at work daily.

Again it's the opposite. A landscape of vibe-coded micro apps is a landscape of buggy, vulnerable points of failure. When you buy a product, software or hardware, you do more than buy the functionality; you buy the assurance it will work. AI does not change this. Vibe code an app to automate your lightbulbs all you like, but nobody is going to be paying millions of dollars a year for vibe-coded slop apps, and apps like that are what keep the tech industry afloat.

> Humanity is not going to stop pouring more and more money into AI.

There's no more money to pour into it. Even if there were, we're out of GPU capacity and running low on the power and infrastructure to run these giant data centres, and it takes decades to bring new fabs or power plants online. It is physically impossible to continue this level of growth in AI investment. Every company that's invested in AI has done so on the promise of continued improvement, but the moment that stops being true, everything shifts.

> The AI bubble isn't going to pop. This is like saying the internet bubble is going to pop in 1999.

The internet bubble did pop. What happened after is an assessment of how much the tech is actually worth, and the future we have now 26 years later bears little resemblance to the hype in 1999. What makes you think this will be different?

Once the hype fades, the long-term unsuitability for large projects becomes obvious, and token costs increase by ten or one hundred times, are businesses really going to pay thousands of dollars a month on agent subscriptions to vibe code little apps here and there?


> Again it's the opposite. A landscape of vibe-coded micro apps is a landscape of buggy, vulnerable points of failure. When you buy a product, software or hardware, you do more than buy the functionality; you buy the assurance it will work. AI does not change this. Vibe code an app to automate your lightbulbs all you like, but nobody is going to be paying millions of dollars a year for vibe-coded slop apps, and apps like that are what keep the tech industry afloat.

This is what everyone says when technology democratizes something that was previously reserved for a small number of experts.

When the printing press was invented, scribes complained that it would lead to a flood of poorly written, untrustworthy information. And you know what? It did. And nobody cares.

When the web was new, the news media complained about the same thing. A landscape of poorly researched error-ridden microblogs with spelling mistakes and inaccurate information. And you know what? They were right. That's exactly what the internet led to. And now that's the world we live in, and 90% of those news media companies are dead or irrelevant.

And here you are continuing the tradition of discussing a new landscape of buggy, vulnerable products. And the same thing will happen and already is happening. People don't care. When you democratize technology and you give people the ability to do something useful they never could do before without having to spend years becoming an expert, they do it en masse, and they accept the tradeoffs. This has happened time and time again.

> The internet bubble did pop... the future we have now 26 years later bears little resemblance to the hype in 1999. What makes you think this will be different?

You cut out the part where I said it only popped economically, but the technology continued to improve. And the situation we have now is even better than the hype in 1999:

They predicted video on demand over the internet. They predicted the expansion of broadband. They predicted the dominance of e-commerce. They predicted incumbents being disrupted. All of this happened. Look at the most valuable companies on earth right now.

If anything, their predictions were understated. They didn't predict mobile, or social media. They thought that people would never trust SaaS because it's insecure. They didn't predict Netflix dominating Hollywood. The internet ate MORE than they thought it would.


Your whole argument is based on 'the technology improves'.

Ok, so another fundamental proposition is monetary resources are needed to fund said technology improvement.

What's wrong with LLMs? They require immense monetary resources.

Is that a problem for now? No, because lots of private money is flowing in and Google et al have the blessing of their shareholders to pump up the amount of cash flowing into LLM-based projects.

Could all this stop? Absolutely, many are already fearing the returns will not come. What happens then? No more huge technology leaps.


This has literally never happened in the history of humanity. Name one technology where development permanently stopped due to lack of funding, despite there being...

1. lots of room for progress, i.e. the theoretical ceiling dwarfed the current capabilities

2. strong incentives to continue development, i.e. monetary or military success

3. no obviously better competitors/alternatives

4. social/cultural tolerance from the public

Literally hasn't happened. Even if you can find 1 or 2 examples, they are dwarfed by the hundreds of counter examples. But more than likely, you won't find any examples, or you'll just find something recent where progress is ongoing.

Useful technology with room to improve almost always improves, as people find ways to make it better and cheaper. AI costs have already fallen dramatically since LLMs first burst on the scene a few years back, yet demand is higher than ever, as consumers and businesses are willing to pay top dollar for smarter and better models.


AI has none of these things.

1. As I said before, we've long since reached diminishing returns on models. We simply don't have enough compute or training data left to make them dramatically better.

2. This is only true if it actually pans out, which is still an unknown question.

3. Just... not using it? It has to justify its existence. If it's not of benefit vs. the cost then why bother.

4. The public hates AI. The proliferation of "AI slop" makes people despise the technology wholesale.


1. Saying that AI will never approach its theoretical limits because XYZ tech is approaching diminishing returns, is like saying guns would never get better than the fire sticks of China in 1000 AD because the then-current methods hit their theoretical limits. You're betting against tens of thousands of the smartest minds of a generation across the entire planet. I will happily take the other side of this bet.

2. Sure, depends on #1. But the incentive is undeniable.

3. It has. Do you think people are using Claude Code in incredible numbers for no reason?

4. The public and businesses are adopting AI en masse. It's incredibly useful. Demand is skyrocketing. I don't think you could show that negative public sentiment has been sufficient to stop this, any more than negative sentiment about TVs, headphones, bicycles, etc (which was significant).

With the exception of #1, I feel like you're arguing that things won't happen, where the numbers show they've already happened and are accelerating.


Thanks for jumping in fella. Agree on all points.

> This is what everyone says when technology democratizes something that was previously reserved for a small number of experts.

What part of renting your ability to do your job is "democratizing"? The current state of AI is the literal opposite. Same for local models that require thousands of dollars of GPUs to run.

Over the past 20 years software engineering has become something that just about anyone can do with little more than a shitty laptop, the time and effort, and an internet connection. How is a world where that ability is rented out to only those that can pay "democratic"?

> When the printing press was invented, scribes complained that it would lead to a flood of poorly written, untrustworthy information. And you know what? It did. And nobody cares.

A bad book is just a bad book. If a novel is $10 at the airport and it's complete garbage then I'm out $10 and a couple of hours. As you say, who cares. A bad vibe coded app and you've leaked your email inbox and bank account and you're out way more than $10. The risk profile from AI is way higher.

Same is even more true for businesses. The cost of a cyberattack or an outage is measured in the millions of dollars. It's simple maths: the cost of the risk of compromise far outweighs the savings of cheaper upfront software.

> You cut out the part where I said it only popped economically, but the technology continued to improve.

The improvement in AI models requires billions of dollars a year in hardware, infrastructure, and energy. Do you think that investors will continue to pour that level of investment into improving AI models for a payout that might only come ten to fifteen years down the road? Once the economic bubble pops, the models we have are the end of the road.


"Thousands of dollars of GPU" as a one-time expense (not ongoing token spend) is dirt cheap if it meaningfully improves productivity for a dev. And your shitty laptop can probably run local AI that's good enough for Q&A chat.

On a SWE salary maybe. If the baseline cost of doing business is a $5k GPU you've excluded like a quarter of the US working population immediately.

Don't waste your time on him. He reminds me of people who are so concentrated on one part of the picture, they can't see the whole damn thing and how all the pieces fit and interact with each other.

You're describing yourself imo. Your point ignores hundreds of years of history and says zero about the forces that shape technological development and progress, which have been studied fairly exhaustively.

> What part of renting your ability to do your job is "democratizing"? The current state of AI is the literal opposite. Same for local models that require thousands of dollars of GPUs to run.

"Renting your ability to do your job"?

I think you're misunderstanding the definition of democratization. This has nothing to do with programmers. It has nothing to do with people's jobs. Democratizing is defined as "the process of making technology, information, or power accessible, available, or appealing to everyone, rather than just experts or elites."

In other words, democratizing is not about people who have jobs as programmers. It's about the people who don't know how to code, who are not software engineers, who are suddenly gaining the ability to produce software.

Three years ago, you could not pay money to produce software yourself. You either had to learn and develop expertise yourself, or hire someone else. Today, any random person can sit down and build a custom to-do list app for herself, for free, almost instantly, with no experience.

> The improvement in AI models requires billions of dollars a year in hardware, infrastructure, end energy. Do you think that investors will continue to pour that level of investment into improving AI models for a payout that might only come ten to fifteen years down the road? Once the economic bubble pops, the models we have are the end of the road.

10-15 year payouts? Uhhh. Maybe you don't know any AI investors, but the payout is coming NOW. Many tens of thousands have already gotten insanely rich: three years ago, and two years ago, and last year, and this year. If you think investors won't be motivated, and there aren't people currently in line to throw their money into the ring, you're extremely uninformed about investor sentiment and returns lol.

You can predict that the music will stop. That's fair. But to say that investors are worried about long payout times is factually inaccurate. The money is coming in faster and harder than ever.


I have no idea what this flood of personal-use software is that you think normal people want to produce. Normal people don't even think about software doing a thing until they see an advertisement about software that does a thing. And then they'd rather pay 10 bucks for it than to invent a shittier version of it themselves for $500.

And I'm not being condescending about normal people. Developers often don't think about the possibility of making software that does a particular thing until they actually see software that does that thing. And they're also going to prefer to buy rather than vibe code unless the program is small and insignificant.


Go look at the numbers from Lovable and Replit and Claude Code and similar companies. Quite staggering.

I myself have run an online community for early-stage startup founders for over a decade. The number of ambitious people who would love to build something but don't know how to code and in the last year or two have started cranking out applications is tremendous. That number is far higher than the number of software engineers who existed before.


That's very much an echo chamber you find yourself in. I'm far away from any technological center, and the main uses of LLMs for people are the web search widget, spell checking, and generating letters. Also kids cheating on their homework.

> Democratizing is defined as "the process of making technology, information, or power accessible, available, or appealing to everyone, rather than just experts or elites."

Your definition only supports my point. The transfer of skill from something you learn to something you pay for is the exact and complete opposite of your stated definition. It turns the activity from something that requires you to learn it into one that only those who can afford to pay can do.

It is quite literally making this technology, information, and power available to only the elite.

> Uhhh. Maybe you don't know any AI investors, but the payout is coming NOW.

What payout? Zero AI companies are profitable. If you're invested in one of these companies you could be a billionaire on paper, but until it's liquid it's meaningless. There's plenty of investors who stand to make a lot of money if these big companies exit, but there's no guarantee that will happen.

The only people making money at the moment are either taking cash salaries from AI labs or speculating on Nvidia stock. Neither of which have much to do with the tech itself and everything to do with the hype.


> It is quite literally making this technology, information, and power available to only the elite.

I don't know what to say to you. More people are coding now with AI than ever coded before. If your argument was true, then that would just mean that there are more elites than ever. Obviously that's not what's happening.

> What payout? Zero AI companies are profitable.

Because they're reinvesting profits into continued R&D, not because their current products are unprofitable. You're failing to understand basic high-growth business models.

> If you're invested in one of these companies you could be a billionaire on paper, but until it's liquid it's meaningless.

Plenty of AI companies have exited, and plenty of other AI companies offer tender offers where shareholders have been able to sell their shares to new investors. Again, it sounds like you just aren't really educated on what's happening. Plenty of people are millionaires in real life, not just on paper. You're massively incorrect about the payout landscape that investors are considering.

> The only people making money at the moment are either taking cash salaries from AI labs or speculating on Nvidia stock.

No, founders, early-stage investors, and employees with stock have cashed out in many cases. Again, it just feels like you're not aware of what's happening on the ground.

> Neither of which have much to do with the tech itself and everything to do with the hype.

That's a very different argument. If you want to say that the investment is unsound, then fine, that's your opinion, but trying to say that investors have no appetite because they have to wait 10 to 15 years for a payout is incredibly incorrect.


> I don't know what to say to you. More people are coding now with AI than ever coded before. If your argument was true, then that would just mean that there are more elites than ever. Obviously that's not what's happening.

I don't know how I can explain this any more clearly.

If you need AI to create software, and the cost of AI is $200/month, then only people who can afford $200/month can create software.

Costs will increase. The current cost is subsidized by investor funding. Sell at a loss to get people hooked on the product and then raise the price to make money, a "high-growth business model" as you say.

The cost to make a competitor to Anthropic or OpenAI is tens or hundreds of billions of dollars upfront. There will be few competitors and minimal market pressure to reduce prices, even if the unit costs of inference are low.

$200/month is already out of reach of the majority of the population. Increases from here means only a small percentage of the richest people can afford it.

I don't know what definition of "elite" you're using but, "technology limited so that only a small percentage of the population can afford it" is... an elite group.

This is fun and all, but I think we've reached the end of the productive discussion to be had and I don't have much more to say. Charitably, we're living in completely different realities. I just hope when the bubble pops the fall isn't too hard for you.


There is a lot you can do to shape the end result to not have these faults. In the end, the engineering mind and rigor still needs to apply, so the hard work doesn't go away.

But, the errors that are described - no architecture adhesion, lack of comprehension, random files, etc. are a matter of not leveling up the sophistication of use further, not a gap in those tools.

As an example: very clearly laying out your architecture principles, guidance, how code should look on disk, theory on imports, etc., and then objectively analyzing any proposed change against those principles, converges toward sane and understandable code.

We've been calling it adversarial testing across a number of dimensions - architecture, security, accessibility, among other things. Every PR gets automatically reviewed and scored based on these perspectives. If an adversary doesn't OK the PR, it doesn't get merged.


> Some people never get to the part where they review the code. They go straight to their LinkedIn or blog and start writing (or having ChatGPT write) posts about how manual coding is dead and they’re done writing code by hand forever. Some people review the code and declare it unusable garbage, then also go to their social media and post how AI coding is completely useless and they’re not going to use it for anything. This blog post shows the journey that anyone not in one of those two vocal minorities is going through right now.

What’s really happening is that you’re all of those people in the beginning. Those people are you as you go through the experience. You’re excited after seeing it do the impossible and in later instances you’re critical of the imperfections. It’s like the stages of grief, a sort of Kübler-Ross model for AI.


I'm deeply convinced that there are 2 reasons we don't see real takes like this:

1) These people are quietly appreciating the 2-50% uplift you get from sanely using LLMs, instead of constantly posting sycophantic or doomer shit for clout and/or VC financing.

2) The real version of LLM coding is boring and unsexy. It either involves generating slop in one shot to POC, then restarting from scratch for the real thing, or doing extensive remediation costing far more than the initial vibe effort; or it involves generally doing the same thing we've been doing since the assembler was created, except now I don't need to remember off-hand how to rig up boilerplate for a table test harness in ${current_language}, and if I wrote a snippet with string ops and if statements and wish it were using regexes and named capture groups, it's now easy to mostly-accurately convert it to the other form instead of just sighing and moving on.

But that's boring nerd shit and LLMs didn't change who thinks boring nerd shit is boring or cool.


> because the real version of LLM coding is boring and unsexy

Some people do find it unfun, saying it deprives them of the happy "flow" of banging out code. Reaching "flow" when prompting LLMs arguably requires a somewhat deeper understanding of them as a proper technical tool, as opposed to a complete black box, or worse, a crystal ball.


Software engineering is only about 20% writing code (the famous 40-20-40 split). Most people use it only for the first 40%, and very successfully (I'm in that camp). If you use it to write your code you can theoretically maybe get a 20% time improvement initially, but you lose a lot of time later redoing it or unraveling. Not worth bothering.

20% is one of those cool lies SWEs have been able to push through (like “our jobs are oh so very special we can’t really estimate it, we’ll create entire sub-industries within our industry to make sure everyone knows we can’t estimate”).

SWEs spend 20% of the time writing code for exactly the same reason brick-layers spend 20% of their time laying bricks


The other 80% is spent on the following:

- A lot of research. Library documentation, best practice, sample solutions, code history,... That could easily be 60% of the time. Even when you're familiar with the project, you're always checking other parts of the codebase and your notes.

- Communication. Most projects involve a team and there's a dependency graph between your work. There may also be a project manager dictating things, and support that wants your input on some cases.

- Thinking. Code is just the written version of a solution. The latter needs to exist first. So you spend a lot of time wrangling with the problem and trying to balance tradeoffs. It also involves a lot of the other points.

Coding is a breeze compared to the others. And if you have setup a good environment, it's even enjoyable.


There’s also just the negative association factor.

I use LLMs in my every day work. I’m also a strong critic of LLMs and absolutely loathe the hype cycle around them.

I have done some really cool things with Copilot and Claude, and I keep sharing them within my working circle because I simply don’t want to interact that much with people who aren’t grounded on the subject.


I would be interested to hear your take on Copilot vs Claude. I have used Copilot (trial) in VS Code and I found it to mostly meet my needs. It could generate some plans and code, which I could review on the go. I found this very natural to me as I never felt 'left behind' in whatever code the AI was generating. However, most of the posts I see here are on Claude (I haven't tried it) and very few mentions of Copilot. What is your impression about them and the use cases each is strong in?

(Context: I'm a different person, but have thoughts on this)

I started using Copilot at work because that's what the company policy was. It's a pretty strict environment, but it's perfectly serviceable and gets a lot of fresh, vetted updates. IDE integration with vs code was a huge plus for me.

Claude code is definitely a messier, buggier frontend for the LLM. It's clunkier to navigate and it has much more primitive context management tools. IDE integration is clunky with vs code, too.

However, if you want to take advantage of the Anthropic subscription services, I've found Claude Code is the way to go... Simply because Anthropic works hard to lock you into their ecosystem if you want the sweet discounts. I'm greedy, so I bit the bullet for all of the LLM coding stuff I do in my personal life.


Copilot isn’t really a competing product to Claude - in fact I use Claude through copilot.

I have found in general that for the type of work I do (senior to staff level engineering, 90-10 research to programming) that Claude Opus is the only model really worth my time - but I just really like the Copilot CLI tooling.


So, are you using it for the 10%?

I do use LLMs to learn about new subjects but we already only bill 10% for "coding" and that's inflating it to cover other parts.

I can't imagine that slopping it up would be a great decision. Having alien code that no one ever understood between a bug report and a solution. Anthropic isn't going to give us money for our lost contracts, is it?


It can be any number of things. From spending an hour or two just writing requirements, to giving an example of existing curated code from another project you wrote and would like to emulate, or rewriting existing apps in a different language/architecture (sort of like translating), to serving as a QA agent or reviewer for the LLM agent, or vice versa.

I kinda like how you can just use it for anything you like. I have a bazillion personal projects I can now get help with, polish up, simplify, or build UI for, and it's nice. Anything from reverse engineering, to data extraction, to playing with FPGAs, is just so much less tedious, and I can focus on the fun parts.


I find the people who end up with spaghetti code did so because they didn’t translate their normal processes over.

Being completely methodical about development really helps. obra/superpowers, for example, gets close but I think it overindexes on testing and doesn’t go far enough with design document templates, planning, code style guides, code reviews, and more.

Being methodical about it takes more time, but prevents a good bit of the tech debt.

Planning modes help, but they are similarly not methodical enough.


That works until you make a plan/tests/etc, set the thing loose, and then when it has trouble it decides "actually the pragmatic thing would be [diverge from the plan/change the tests/etc]" and goes off the rails. I'm so frustrated by these things right now.

I have honestly not had that problem much. Being specific, concise, and strong with your prompts helps out a lot.

I feel like recently HN has been seeing more takes like this one and at least slightly less of the extremist clickbaity stuff. Maybe it's a sign of maturity. (Or maybe it's just fatigue with the cycle of hyping the absolute-latest model?)

It takes time for people to go through these experiences (three months, in OP's case), and LLMs have only been reasonably good for a few months (since circa Nov'25).

Previously, takes were necessarily shallower or not as insightful ("worked with caveats for me, ymmv") - there just wasn't enough data - although a few have posted fairly balanced takes (@mitsuhiko for example).

I don't think we've seen the last of hypers and doomers though.


> LLMs have only been reasonably good for a few months (since circa Nov'25).

Ironically this itself is one of the hyper/doomer takes.


Can you point to one other post like this? Curious. Thanks

It's actually common for human-written projects to go through an initial R&D phase where the first prototypes turn into spaghetti code and require a full rewrite. I haven't been through this myself with LLMs, but I wonder to what extent they could analyse the codebase, propose and then implement a better architecture based on the initial version.

Let's be real, a lot of organizations never actually finish that R&D phase, and just continue iterating on their prototypes, and try to untangle the spaghetti for years.

I recently had to rewrite a part of such a prototype that had 15 years of development on it, which was a massive headache. One of the most useful things I used LLMs for was asking it to compare the rewritten functionality with the old one, and find potential differences. While I was busy refactoring and redesigning the underlying architecture, I then sometimes was pinged by the LLM to investigate a potential difference. It sometimes included false positives, but it did help me spot small details that otherwise would have taken quite a while of debugging.


If you write that first prototype in Rust, with the idiomatic style of "Rust exploratory code" (lots of defensive .clone()ing to avoid borrowck trouble; pervasive interior mutability; gratuitous use of Rc<> or Arc<> to simplify handling of the objects' lifecycle) that can often be incrementally refactored into a proper implementation. Very hard to do in other languages where you have no fixed boilerplate marking "this is the sloppy part".

Rust is a language for fast prototyping? That’s the one thing Rust is absolutely terrible at imo, and I really like the production/quality/safety aspects of Rust.

It's not specialized for fast prototyping for sure, but you can use it for that with the right boilerplate.

For me it’s just a matter of “does this actually save me time at all?”

If it generates the slop version in a week but it takes me 3 more weeks to clean it up, could I have just done it right the first time myself in 4 weeks instead? How much money have I wasted in tokens?


I've been arguing that it's POSSIBLE to get a small (but meaningful) uplift in productivity on average if you are careful with how you use LLMs, but at the same time, it's also extremely easy to actually negatively impact your productivity.

In both cases, you feel super productive all the time, because you are constantly putting in instructions and getting massive amounts of output, and this feels like constant & fast progress. It's scary how easy it is to waste time on LLMs while not even realizing you are wasting time.


A car saves you time in getting to and from the store. But if you don't learn to drive, and just hop in the car and press things, you're going to crash, and that definitely won't save you time. Cars are also more expensive than walking or a bike, yet people still buy them.

I already know how to drive stick (trad coding), I don’t feel like I’m gaining much by switching to automatic transmission.

Yeah that's not the difference, lol. With AI coding you can get the same work done in an order of magnitude less time, without even knowing how to program.

The only comparison I can come up with is 3D printers, but even that's not as ridiculously fast and easy as AI coding. An average person can ask an agent to write a program, in any popular language, and it'll do it, and it'll work. We still need people intelligent enough to steer the agent, but you do not need to edit a single line of code anymore.


Most people that don't know how to program have no real desire in coding with AI (unless to pose as a SWE and get that sweet money). Most of them don't even like computers. Yes they do some tasks on it, but they're not that attached to the tool and its capabilities.

> does this actually save me time at all?

Soooooo....

As one who hasn't taken the plunge yet -- I'm basically retired, but have a couple of projects I might want to use AI for -- "time" is not always fungible with, or a good proxy for, either "effort" or "motivation"

> How much money have I wasted in tokens?

This, of course, may be a legitimate concern.

> If it generates the slop version in a week but it takes me 3 more weeks to clean it up, could I have I just done it right the first time myself in 4 weeks instead?

This likewise may be a legitimate concern, but sometimes the motivation for cleaning up a basically working piece of code is easier to find than the motivation for staring at a blank screen and trying to write that first function.


Well for me, the amount of time/effort as a function of my motivation has acted as a natural gatekeeper to bad ideas. Just because I can do something with AI now doesn’t necessarily mean that I should. I am also wary of trading time and effort for outright money right out of my own pocket to find out, especially when I find the people I’d be giving money to so reprehensible. I don’t live somewhere where developers make a lot of money. I’m not poor by any stretch, but not rich enough that I can waste money on slop for funsies. But I can spend a month on validating a side project because I find coding as a hobby enjoyable in and of itself, and I don’t care if I throw out a few thousand lines of code after a little while and realize I’m wasting my time.

Cleaning up agent slop code by hand is also a miserable experience and makes me hate my job. I already do it at $DAYJOB because my boss thinks “investing” in third worlders for pennies on the dollar and just giving them a Claude subscription will be better than investing in technical excellence and leadership. The ROI on this strategy is questionable at best, at least at my current job. Code review by humans is still the bottleneck, and delivering proper working features has not accelerated because they require much more iteration due to slop.

Would much rather spend the time making my own artisanal tradslop instead if it’s gonna take me the same amount of time anyway - at least it’s more enjoyable.


Your position makes an immense amount of sense for your described situation.

As I said, I'm retired, and so I've never had to clean up AI slop at $DAYJOB.

Since the whole AI thing would be a learning experience for me, it would include trying to toilet train the AI itself, as others have intimated can be done in some cases, rather than dealing with a bunch of already-checked-into-the-repo-slop.

And that may be a losing proposition. I don't know; haven't tried it yet.

> Would much rather spend the time making my own artisanal tradslop instead if it’s gonna take me the same amount of time anyway - at least it’s more enjoyable.

Although I haven't had the AI experience you describe, I have had a similar experience with coworkers who moved fast and broke all kinds of shit. That was similarly no fun. It's like trying to work on your wife's minivan, but she won't pull over and let you properly fix it.

Given sufficient time, I enjoy polishing/perfecting/refactoring code. My final output often looks radically different from my prototype. It is clear to me that I would hate the situation you describe. It is not clear to me that starting with prompted slop and wrangling it into submission would be much less enjoyable to me than writing my own slop and then wrangling it into submission.

> especially when I find the people I’d be giving money to so reprehensible.

This is a bit of a concern, but I'm pretty sure that, at the moment, every token you burn costs them more than you.


Good code will stand the test of time. No matter how great a project is, there’s no world in which I adopt anything written over a few months and just released, for maintenance reasons alone.

> you need to learn how to use them correctly in your workflow and you need to remain involved in the code

I completely agree that this is the case right now, but I do wonder how long it will remain the case.


Without wanting to sound rude: I think the mistake people make with AI prototypes is keeping the code at all.

The AI’s are more than capable of producing a mountain of docs from which to rebuild, sanely. They’re really not that capable - without a lot of human pain - of making a shit codebase good.


It's a very accurate and relatable post. I think one corollary that's important to note to the anti-AI crowd is that this project, even if somewhat spaghettified, will likely take orders of magnitude less time to perfect than it would for someone to create the whole thing from scratch without AI.

I often see criticism towards AI-driven projects that assumes the codebase is crystallized in time, when in fact humans can keep iterating with AI on it until it is better. We don't expect an AI-less project to be perfect in 0.1.0, so why expect that from AI? I know the answer is that the marketing and Twitter/LinkedIn slop makes those claims, but it's more useful to see past the hype and investigate how to use these tools, which are invariably here to stay.


> this project, even if somewhat spaghettified, will likely take orders of magnitude less time to perfect than it would for someone to create the whole thing from scratch without AI

That's a big leap of faith and... kinda contradicts the article as I understood it.

My experience is entirely opposite (and matches my understanding of the article): vibing from the start makes you take orders of magnitude more time to perfect. AI is a multiplier as an assistant, but a divisor as an engineer.


vibing is different from... steering AI as it goes so it doesn't make fundamentally bad decisions

Both of these are not really the right way to use AI to code with. There are two basic ways to code with AI that work:

1. Autocomplete. Pretty simple; you only accept auto-completes you actually want, as you manually write code.

2. Software engineering design and implementation workflow. The AI makes a plan, with tasks. It commits those plans to files. It starts sub-agents to tackle the tasks. The subagents create tests to validate the code, then writes code to pass the tests. The subagents finish their tasks, and the AI agent does a review of the work to see if it's accurate. Multiple passes find more bugs and fix them in a loop, until there is nothing left to fix.
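The tests-first step in workflow 2 can be sketched in a few lines; this is a toy illustration only, with a made-up function (`slugify`) standing in for whatever the subagent is tasked with:

```python
# Toy sketch of the "create tests first, then write code to pass them" step.
# The function and its expected behavior are invented for illustration.
import re

def test_slugify():
    # The agent commits the expected behavior as tests first...
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"

def slugify(title: str) -> str:
    # ...then writes the smallest implementation that passes them.
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()  # the review loop reruns this until nothing is left to fix
```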

I'm amazed that nobody thinks the latter is a real thing that works, when Claude fucking Code has been produced this way for like 6 months. There's tens of thousands of people using this completely vibe-coded software. It's not a hoax.


#2 does not negate my steering suggestion, so I'm not sure how you can conclude nobody thinks it's a real thing that works

also Claude Code is notoriously poorly built, so I wouldn't tout it as SOTA


I have worked at companies from startups to fortune 500. They all have garbage code. Who cares? It works anyway. The world is held together with duct tape, and it's unreasonably effective. I don't believe "code quality" can be measured by how it looks. The only meaningful measure of its quality is whether it runs and solves a user's problem.

Get the best programmer in the world. Have them write the most perfect source code in the world. In 10 years, it has to be completely rewritten. Why? The designer chose some advanced design that is conceptually superior, but did not survive the normal and constant churn of advancing technology. Compare that to some junior sysadmin writing a solution in Perl 5.x. It works 30 years later. Everyone would say the Perl solution was of inferior quality, yet it provides 3x more value.


I hear you about "it just works" mattering infinitely more than some arbitrary code quality metric

but I'm not judging Claude Code by how it looks. I kinda like the aesthetics. I'm talking about how slow, resource hungry and finnicky/flickery it is. it's objectively sloppy


> when Claude fucking Code has been produced this way for like 6 months

And people can look at the results (illegally) because that whole bunch of code has been leaked. Let's just say it's not looking good. These are the folks who actually made and trained Claude to begin with, they know the model more than anyone else, and the code is still absolute garbage tier by sensible human-written code quality standards.


Yet it works anyway. What does that say about human code quality standards?

Human code quality standards are built around the knowledge that humans prefer polished products that work consistently. You can get away without code quality in the short term, especially if you have no real competitors - to a lot of people, there just aren't any models other than Anthropic's which are particularly useful for software development. But in the long term it gets you into a poor quality trap that's often impossible to escape without starting over from scratch.

(Anthropic, of course, believes that advances in AI capability over the next few years will so radically reshape society that there's no point worrying about the long term.)


Someone used Claude Code to generate a very simple staffing management app. The sort of thing that really wouldn't take that long to make, but why pay for any software when you can just ignore the problem, amiright? Anyway, the code that got generated was full of SQL injection issues for the most absurd sorts of things. It would have 80% of the database queries implemented through the ORM, but then the leftover stuff was raw string concat junk, for no good reason because it wasn't even doing any dynamic query or anything that the ORM couldn't do.
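For anyone unfamiliar with why the raw concat is a problem: here's a minimal sketch of the difference, with an invented table and hostile input, using Python's stdlib sqlite3 placeholders to stand in for the parameter binding the ORM was already doing elsewhere:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (name TEXT)")

name = "Robert'); DROP TABLE staff;--"  # hostile user input

# Injection-prone: building SQL by concatenating user input into the string.
# query = "INSERT INTO staff (name) VALUES ('" + name + "')"  # DON'T

# Safe: a parameterized query; the driver binds the value as data, not SQL.
conn.execute("INSERT INTO staff (name) VALUES (?)", (name,))

row = conn.execute("SELECT name FROM staff").fetchone()
assert row[0] == name  # the hostile string is stored as a plain value
```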

> This blog post shows the journey that anyone not in one of those two vocal minorities is going through right now

Is there evidence these groups are a minority? I mean, the OP sounds like they are taking the right approach but I suspect it requires both skill/experience and an open mind to take their approach.

Just because an approach has good use-cases doesn't mean those are going predominate.


Those extreme takes are made mostly for clicks, or are exaggerated second hand so the "other side's" opinion sounds dumber than it is, to "slam the naysayers". Most people are meh about everything, not on the extremes, so to pander to them you mock the extremes and make them seem more likely. It's just online populism.

I'm sorry, who is this for? Aren't you maybe a little tired of talking about this, if not totally <expletive> bored??

There is something at this point kind of surreal in the fact that you know every day there will be this exact blog post and these exact comments.

Like, it's been literal years and years and yall are still talking about the thing that's supposed to do other things. What are we even doing anymore? Is this dead internet? It boggles the mind that we are still at this level of discourse, frankly.

Love 'em hate 'em, I don't care, yall need to freaking get a grip! Like for the love of god, read a book, paint a picture! Do something else! This blog is just a journey to snooze town and we all must at some level know that. This feels like a literal brain virus.


This is exactly why I built https://github.com/andonimichael/arxitect . I’ve found that agents by default produce tactical but brittle software. But if you teach agents to prioritize software architecture and design patterns, their code structure becomes much much better. Additionally, better structured code becomes more token efficient, requires less context to make changes, and coding agents become more accurate.

Sad to see someone’s small business close, but these products are in a difficult position of being both extremely niche and very simple. Someone went to some effort to source a nice dongle enclosure and do some printing on it, but beyond that the hardware is something that anyone with a little PCB experience could replicate in a day. I wouldn’t be surprised if these were just sourced from a generic manufacturer in China with custom printing.

If there’s demand this would be a good project for someone to make and have ready-to-build PCBs you could order from OSH Park, or even a full project that you could have JLC build and populate.


I have a tiny business that makes a gadget for musicians: Think something like an effects pedal. I publish my schematics, have shared PCB files, and even offer to give you some of the parts that are hard to get. A few people have built their own, and share their results in web forum threads about my product.

Very few people have taken the bait. I think we techies over-estimate the ability and inclination of people to make something. Even most programmers don’t want to solder. They may still be technically inclined, but want to be involved at a higher level: Buying the basic stuff and using it as a basis for even more elaborate things.


If you offered me schematics and PCB designs for a tool I desired, I might be /more/ inclined to give you money just to support you. Nothing to do with my ability or interest in DIY (I also design and sell electronics).

I love learning from and building that type of thing (plus have some musician friends who can never have too many gadgets). Would you be willing to share a link?

Note: if you don’t want to create a link to yourself, you could also email me at <my username>0_AT_protonmail_._com


>I think we techies over-estimate the ability and inclination of people to make something.

But that was never the worry. The worry is a competitor undercutting you because they do not have to recoup R&D.


A few tricks to beat this are community, brand, an app that only works with the official version (hard to pull off and sad), and going more niche.

> I think we techies over-estimate the ability and inclination of people to make something.

If you replace 'we techies' with 'most folks' (and especially with 'folks who didn't do a thing with their hands in their lives') nothing would change.


This company and product line launched nearly 20 years ago (2007) and doesn’t seem to have changed much since. That’s quite a long time for something like this. If the owners had wanted the business to continue (perhaps they didn’t), some diversification could have achieved that relatively easily.

Your assumption here is that they wanted it to continue making money, rather than (for example) reacting to the influx of new orders from HA’s announcement by shutting it down. Perhaps a working source of revenue is being voluntarily terminated rather than having starved to death?

rel=nofollow signals that a link should not be counted by search crawlers in authority calculations. It's used on most sites with user-submitted content, including Hacker News.

You basically have to use nofollow for comments otherwise your site becomes a big target for SEO link spam.
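As a minimal, hypothetical sketch (the `render_user_link` helper is made up for illustration, not taken from any real site's code), a site rendering user-submitted links might tag them like this:

```python
from html import escape

def render_user_link(url: str, text: str) -> str:
    # rel="nofollow" tells search crawlers not to pass ranking
    # authority through links that site visitors submitted,
    # which removes most of the incentive for SEO link spam.
    return f'<a href="{escape(url, quote=True)}" rel="nofollow">{escape(text)}</a>'

print(render_user_link("https://example.com/?a=1&b=2", "my site"))
# → <a href="https://example.com/?a=1&amp;b=2" rel="nofollow">my site</a>
```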


> This book was SO GOOD.

One of the (very valid, IMO) criticisms of the book is that the author tries to set herself apart from the culture she was deeply embedded within. I think it's becoming a trap to hold the author up as a hero when she was clearly part of it all to the very core. It was only after she got separated from the inner circle club that she tried to distance herself from it.

So while reading it, be careful about who you hold up as a hero. In a situation like this it's possible for everyone to be untrustworthy narrators.


We would have no book if the author were a hero: they would say "I'm not doing this," quit, and that would be the end of it. By this definition, only an unheroic person could've written it. By the same definition, a firsthand exposé of Meta could never be written by a trustworthy person.

This obviously protects the company: you are ceding this ground to them, "No trustworthy person could work at your company and write an expose." I don't think we should cede that to them.


Why is it that the only people willing to testify against the cartel are murderers, drug dealers, and bank robbers? These are not trustworthy witnesses.

Same problem.


GP's point is not that only heroes should tell the tale, but rather that in this case the whistleblower was also an active part of the problem, yet sought to distance herself from her then-behavior by swapping it out for a more passive lack of situational awareness. That is, she reached for stupidity as an escape hatch from having to reckon with her own malice. And she's now being celebrated for it.

The lack of accountability paired with the celebration of the "hero" are the problem. Not the fact of her testimony.

EDIT: Some people who have similarly testified acknowledged the part they played in the situation they later denounced. So, it is possible for the story to be told and for the teller to also say "I knew what was up. I said nothing. I did nothing. I'm sorry."


Not really. The author could have blown the whistle and quit early on, revealing unsavory things.

The book mainly attempts to embarrass Zuck (e.g., he’s sweaty, he’s not good at Catan, etc.).


The fact that she did end up setting herself apart is what’s remarkable. For every one of her who was able to self-reflect, become horrified by the ethics of what she was doing, and take the hard steps of stopping and breaking away, how many current and former Meta employees don’t do this reflection and remain contributors to the problem? 1:100? 1:1,000? 1:10,000?

A few years ago I had a date with a backend engineer at Meta.

I asked if they'd ever considered the societal implications of the work they did. They said "Oh wow I've never even thought about it". Probably a solid hire from Meta's perspective.


I know an ex-Facebook employee who told me that "Nobody at Facebook ever makes a conscious decision about whether something is good or bad. You are given a metric, and your job is to make that metric go up. If it turns out that making the metric go up has negative consequences [for the business, I don't think it's anyone's job to worry about the rest of society], then somebody else is given another metric to ameliorate the negative consequences of you making your metric go up."

He didn't last all that long, he had a conscience. I've heard similar things, but not quite in such clear words, from several other people I know who have worked at Facebook/Meta.


I know a couple of people who said exactly the same thing. One of them is quite smart, and when I asked for his/her personal opinion, I heard: "I'd rather not talk about it ever again"

That's a really effective way to get a group of people to do horrible things. Break it up into small pieces where each one isn't that bad in and of itself.

Basically how a corporation is structured. The whole point is limited legal liability, so that the corporation as a whole can do things that would be blatantly illegal if any one person did them.

Governments too. The defining characteristic of a state is the monopoly on the legitimate use of violence. Some more recent theories on state formation come down to the state being the biggest bandit of them all, the one that subsumes and threatens to kill all other organized sources of violence, and hence becomes the "legitimate" one simply because it has eliminated all other contenders. One of the most popular courses at my college was entitled "Murder", and the syllabus was largely devoted to this tension between how the worst crime of all, when talking about individuals, is simply how states do business.


That sounds mysterious and important

Or you missed the eye-rolling sarcasm in the answer they have to give on every goddam first date.

Maybe I'm just a wacky Bleeding-Heart, but I don't think it's unreasonable to expect someone who worked on a product that amplified hate, leading up to a massacre in Myanmar, to at least address that without sarcasm while getting to know them.

Maybe it’s a question for the third date?

Getting to know the views and values of your date is not a weird thing to do on the first date. If it’s a question that annoys them, they should consider why.

Imagine dating someone who works at Facebook, though. I can't imagine who would be so utterly dense as to offer so presumptuous a complaint, but he'd better be at least a 13 out of 10 or I'm not even bothering to pretend to go to the bathroom and then sneak out the back.

You sound fun to date

That can only be a sarcastic answer, don’t you think?? You really believe people would get a job at the former Facebook, after lots of scandals have been exposed, and not even think about that?? Sorry, no way.

You seriously underestimate how myopic people can be. Not everyone is a sophisticated and socially aware HN commenter like you and I.

Did you read the book? Because that's not the story: she had way too many opportunities to do that, yet didn't. Only after she stopped getting paid did she do an "exposé".

I hate facebook more than the next guy, but this person just helped Facebook accomplish the usual evil things, and only stopped once she could no longer profit. I'm pretty sure she didn't start that way, or maybe didn't even see it that way, but objectively (in her own narrative, if you only take actions and ignore her own emotional justifications) that's what happened.


She didn’t set herself apart. She was fired. She was forced apart.

That’s the issue here. Is this someone who found their morals or someone who found a stick with which to strike back at those who hurt her?

One of those doesn’t require her to change at all.


Even if she was fired it was an act of courage and a step in the right direction to write a book about it. The company is cancer, no wonder they named it Meta.

Courage guided by righteousness or vengeance? I feel like the motivation is very important here.

I don't particularly think so. What matters is whether the stuff in the book was true, not whether the author is of unassailable character

When one's goal is to look for any reason to downplay facts, questioning the character of the messenger is a standard tactic.

Indeed it is. One can't help but wonder why such a well-known distraction tactic still remains so effective against so many

Could be courage guided by a paycheck. I would not be surprised if the publisher reached out directly to suggest writing a book.

How is it courageous? She’s profiting off her book. Seems pretty normal.

Are all these comments written by meta AI bots?

If we require every whistleblower to be a saint, then we’ll never hear a whistle. If you have a serious criticism of their credibility, that’s potentially different, but arbitrary criticisms of someone’s moral worth is mostly irrelevant.

The fact that someone actively worked against the welfare of society as a whole, in significant and impactful ways, _is_ a criticism of their credibility. It speaks to their morals and empathy for others.

It doesn't mean that what they're saying is a lie, but it puts them firmly in the bucket where what they say needs to be verified.


It doesn't matter if she's as bad as the others. The message is that the others are bad. Pointing out that she's also bad is meek at best.

You are missing the entire point

> The message is that the others are bad

The message is that they're bad and the fact that they did these bad things proves they're bad.

And the key thing here is that we need to decide if we believe "they did these bad things". If the person reporting them is well known as someone who is truthful and trustworthy, we're likely to believe them with little proof. If the person reporting them is well known as a bad person who does things to harm others for their own benefit... we're less likely to believe them until we can verify the truth of their statements.

You're completely skipping over the "is this person telling the truth" part; I assume because they're saying things that fit in with your pre-existing view of the world. And that's not a good thing.


A strange response.

Rather than address the comment you change the subject, “whaddabout the author!”

Why do the dark work of deflecting on behalf of “Meta”?

(lol, that name gets me every time. Might as well have renamed themselves NoIdeaWhatToDoNow)


Because recognizing the author as conflicted and an unreliable narrator changes how you should weight and consider the information they are providing. It doesn't necessarily mean anything is untrue - but it does add extra, valuable information to how much you trust it.

If someone tells me something, I'm mostly likely to believe it without further investigation. But not always.


Another one. Deflecting the criticism of Meta with a “whaddabout the author!”

Formed as an answer to a question, but not one that was asked.

A different account than last time, though, so I’ll ask you too: Why do the dark work of deflecting on behalf of Meta (lol)?


I think the point is that up until she was fired, she was Meta. She wasn’t a random employee, she was their global public policy director. She wasn’t just implementing policy, she was responsible for creating it.

The question remains whether or not she would have written this book had she not been fired.

It’s not like she quit due to her ethical objections


The question does indeed remain, but is it a question whose answer matters?

If someone exposes a shady organization why should I care if they did it for ethical reasons or for something less noble like revenge for getting kicked out of that organization?


>> but is it a question whose answer matters?

I think it does? "Scummy person loses job, finds another way to cash in" almost seems to be becoming a trope? I think it raises questions about what is left _out_ of the book, not just what's in it - are the issues raised the worst/most important, or just the ones that will sell the most books? Did we really need someone to 'tell us' meta/social media can be evil?

There are reasons that (some) criminals are not allowed to profit from books/movies about their crimes.

Anyway, that's just my general feeling about this sort of book - I've never heard of the book or the author. And I honestly have no interest in reading it. Based on what I'm reading here, that would basically be rewarding/enriching one of the 'bad actors'?


Because it doesn’t really target the issue.

Would she go do the same job at Alphabet? X? Probably, if they’d have her.

And the only real thing that’d happened is the government has been used to remove other companies’ competition.

Hooray I guess


> but is it a question whose answer matters

Yes. 100%. And the fact that you're not seeing why it does is confounding to me.

This person has shown that they are willing to harm society (for their own benefit, presumably); by active choice. And, as such, anything they say needs to be viewed through the lens of "is this person lying for their own benefit".

1. Their previous actions do mean that we should not trust what they are saying outright, we should do (more) work verifying the information they provide.

2. Their previous actions do _not_ mean we should avoid holding others accountable when the information provided turns out to be true.

You're asking your question like someone is arguing that this person's information doesn't matter (2); but the point being made is that we should (1).


> The question remains whether or not she would have written this book had she not been fired.

Assume the answer is no. What does this change about any of this?


She’s attempting to use the public to bludgeon Meta.

This is a fight among shitty people. I will not lionize either side. They both contributed to the shitty state of affairs today.

Meta can burn and she can go broke. I’m fine with both


That's a good reaction to have.

Thankfully she wrote the book so we know about all these bad deeds.


I don’t think the book revealed anything new about FB’s bad deeds, though. Was there novel info?

I think its just more exposure for already bad things.

Had she had a trove of emails or something, I might think differently.

This is quite different from the recent lawsuits that produced novel material and evidence.


Knowing something is happening and reading detailed descriptions of them actually occurring is different, IMO. I learned things I didn't know while reading it, at least.

A third “whaddabout the author”!

It’s almost as if…


Many of the juicy stories from the book have no supporting evidence other than the claims of the author. Their credibility is all we have to go on here. If someone wrote a message here saying that they were a fly on the wall at the publisher’s office where they had a workshop inventing these stories to sell more books, you’d be right to question their motives.

Even justice system considers the trustworthiness of a witness, evaluating incentive, conflict of interest.

Having worked in another FAANG, I realize a large number of criticisms do come from imaginations, since I could see the contrast first hand. Nobody could tell exactly the consequences of all actions, most of the time it's just a buncha folks trying to figure out what to do, experimenting, iterating. Have you tried executing a conspiracy, like a surprise party? Good luck keeping a secret with more than 5 people.

There's also the problem of perspective. To a less technical engineer who doesn't know what they don't know, having their deliverable rejected time and again could feel like a conspiracy against them. If you read a blog post from them, you'd think the culture is very toxic, when everyone is doing their best juggling to be considerate while keeping the quality high.

As with others commenting on this, I've no idea how true the book is; in fact I have never read it. OTOH, even without the book, research saying social media is making teenagers depressed looks convincing to me, and, although it's a losing battle, privacy matters a lot to me, so I've personally stopped using social media for many years.

None of these give me full confidence to trust nor distrust the narrator, for things that you can't observe externally. It's all percentage.


The fate of every whistleblower

I believe what Sarah Wynn-Williams wrote in Careless People.

I also think she's shown herself to be a person I'd want to stay away from.

The reason this matters to me is because the more media attention Ms. Wynn-Williams gets, the more her ideas of what we should do about Meta will spread and be given credence. The more she will be given credence outside of simply reporting what she saw. I can both believe what she says and think it's best to stop fanning the flames and giving her personal attention.

This entire saga reads to me as intra-elite fighting: Ms. Wynn-Williams is representing the cultural/educational elite, and obviously the Meta execs are the tech elite. As an ordinary person, I'm not under any delusion that either side has my best interest in mind when they fight, or when they advance policy, regulatory, or other suggestions. The derision and disdain Ms. Wynn-Williams has for people not in her milieu throw up a lot of red flags for me.

It comes down to believing that Ms. Wynn-Williams wants to hurt Meta, not to help us.

I also believe that blindly supporting people or organizations just because they also hate people or organizations you hate is a very bad idea. The enemy of your enemy can still be your enemy. In this case, regarding technological politics, Zuck and co. want us to become braindead addicted zombies, and Ms. Wynn-Williams will want us to have no control or access at all, because we can't handle it and it's for our own good. She's from the cultural group pushing for things like age restriction and verification, devices you can't root/restricting what you can install on your own device, etc. Both are bad. One sees us as cattle and the other sees us as toddlers.


I think the view you put here is legitimate, Greenwald had an article about Frances Haugen kind of saying the same sorts of things

To all future whistle-blowers: Please ignore comments like this one! What you are doing is a valuable service to society.

Yeah, the fact that she realized what's going on and still worked tirelessly to give Mark / Facebook more negotiating power speaks volumes. I also can't buy the whole "I have financial woes and can't escape" spin that she puts on her situation.

Otherwise, great book.


“I only make $4M/ year in RSUs and am an attorney, however will I pay for daycare for my three kids and teacher husband. I better continue acting unethically and profiting from hosing people.”

This was such a weird argument. I think the author may actually be deluding herself, as I can’t imagine she or her editors think anyone buys this argument.


I don't know if anyone is holding the author up as a hero, least of all herself. The book reads as a masterclass in grooming, manipulation and abuse.

If anything, the title "Careless People" does a disservice to its message: the people above and around her clearly knew exactly what they were doing, and took great care to evade any and all responsibility for anything.


> In a situation like this it's possible for everyone to be untrustworthy narrators.

Even if you take her as a trustworthy narrator (which I mostly did), she's still evil in this story right up until publishing the book.


I haven't read the book, but I don't think there's anything dishonest about needing distance to see the context of what she was a part of. Now, if she is trying to paint herself as completely outside of that even while she was knee-deep in it, that's a different matter but hindsight isn't something to be dismissed.

There is nuance here though. Taking a step back and learning from an experience is something to be celebrated.

So in this way we should dismiss all whistleblowers?

You read the book. Did she have the receipts or not?

Sometimes you can still learn from a story.

Humans are about making mistakes and learning from them, not hiding behind the disease of perfectionism.

If there's something the author needs to say, I'm sure they are capable of using their words.

The other side that could have happened so easily is so much silence that there was no book.


You are probably right that she was part of it all. That's what money and power do to you. We need to limit it. The "eat the rich" stuff is the wrong messaging but the right goal. We need to reduce the concentration of wealth and power.

> I think it's becoming a trap to hold the author up as a hero

Cool, then don’t do that.

Every single employee at Meta is still vile and making the world a worse place every single day, and anything exposing the depths of their shittiness, no matter the source, is a good thing.


Waking up from a cult doesn't make you a hero, but stopping the cult might.

> Your employment agreement can include stuff like “if you say anything bad about us, even to your family in your own home, you owe us $50,000”.

Non-disparagement clauses are limited by the law, which in the United States is augmented by state-level restrictions. There have been some recent developments from the NLRB limiting how severance agreements can be attached to non-disparagement clauses, too.

So it's not generally true that you can be liable for $50K for saying anything bad about your employer in your own home.

The situation with this author is on the other end of the "in your own home" spectrum: They went out and wrote a whole book against their employer that violates NDAs, too. Regardless of what you think about Meta or the author, this was clearly a calculated move on their part to draw out a lawsuit, because it provides further press coverage and therefore book sales (just look at all the comments in this thread from people claiming they're motivated to go buy it now). Whether the gamble pays off or not remains to be seen.


Overall this feels a good thing for public, even if the author is money oriented, because this will hopefully make even more details public.

I personally have no qualms about one criminal being extorted by another, especially if their feud is making the world better for everyone.


Huh, I didn't think of that. If you are aware of the Streisand Effect, it is only logical to use it to your own advantage. Just like Cunningham's Law, you can often get the right answer by posting the wrong one. In fact, this is probably the first time someone is knowingly using the Streisand Effect to their own advantage. There are no prior examples.
