I call it Tradcoding. Not using AI for anything. (You just copy-paste from StackOverflow, as our forefathers once did ;)
I also have two levels "beneath" vibe coding:
- Power Coding: Like power armor, you describe chunks of code in English and it's built. Here you outsource syntax and stdlib, but remain in control of architecture and data flow.
- Backseat Coding: Like vibe coding but you keep peeking at the code and complaining ;)
I feel like this distinction isn't made often or clearly enough. AI as a superpowered autocomplete is far more useful to me than trying to one-shot entire programs or modules.
Agreed. I'd also add that I have varying levels of watchfulness: paid work I inspect (and understand) every line, and iterate. JS for my blog, I inspect. Throwaway scripts, I skim.
I dunno; I think Tradcoding would go beyond regular modern coding, and rather imply some kind of regressive Nara Smith "first grind and sift the flour in your kitchen"-style programming.
No Internet connection, no cache of ecosystem packages, no digitized searchable reference docs; you sit in a room with a computer and a bookshelf of printed SDK manuals, and you make it work. I.e. the 1970s IBM mainframe coding experience!
I did something kinda like that when I realized I worked way better when I disconnected my internet. So I had to download documentation to use offline. Quite refreshing honestly.
Not necessarily more efficient, but it feels healthier and more rewarding.
This isn't terribly far from "Knuth-coding", to call it something - imagining the program in WEB in its purest form and documenting what it does, almost regardless of the actual programming language and how it is done.
I would probably just call it hand coding, as we say we use hand tools in wood working. Many do this for fun, but knowing the hand tools also makes you a better woodworker.
It's an interesting question: Will coding turn out to be more like landscaping, where (referring specifically to the practice of cutting grass) no one uses hand tools (to a first approximation)? Or will it be more like woodworking, where everyone at least knows where a Stanley hand plane is in their workshop?
Can't wait to sell my artisanal hand-crafted software at the farmer's market.
Humor aside, longhand programming is losing its ability to compete in an open market. Automate or be left behind. This will become increasingly true of many fields, not just software.
That's actually a great point: judging by the dev team's commits at work there's an unprecedented amount of code being committed but it's not actually making it into releases any faster. Maybe the same thing is happening at my various vendors, but then that kind of argues against the idea that Everything Has Just Changed.
> “Autonomous Proxies for Execration, or APEs,” Pluto said.
> “By typing in a few simple commands, I can spawn an arbitrary number of APEs in the cloud,” Pluto said.
> “I have hand-tuned the inner loops to the point where a single APE can generate over a megaBraden of wide-spectrum defamation. The number would be much larger, of course, if I didn’t have to pursue a range of strategies to evade spam filters, CAPTCHAs, and other defenses.”
“Have you tried this out yet?” Corvallis asked.
“Not against a real subject,” Pluto said. “I invented a fictitious subject and deployed some APEs against it, just to see how it worked in the wild. The fictitious subject has already attracted thousands of death threats,” he added with a note of pride.
“You mean, from people who saw the defamatory posts seeded by the APEs and got really mad at this person who doesn’t even exist.”
Make a fictitious subject with all the traits of the person you really want to attack (Subject X). Have your social media bots attack Subject X. Anger spillover on social media will begin attacking your true target by trait association. The real target will have a difficult to impossible time coming at you via legal channels as there is no direct association.
I am ape writing this post after ape cooking breakfast, and then I'll go for an ape walk. In the future, maybe by Thursday, I can have agents do all of that and relax.
It's not ape coding. It's skill coding. People who don't have the skill to do math and logic ask others to do it for them.
The reason we have programming languages is the same reason we have musical notation or math notation. It is a far more concise and precise way of communicating than using natural languages.
We could write music using natural language, but no one does because a single page of music would require dozens of pages of natural language to describe the same thing.
It's funny that you mention music and notation: sheet music is very compact for musical absolutes like pitch/rhythm/harmony, but a huge part of what we care about with music is nuance, which doesn't reduce cleanly to symbols. Hence there are plenty of words in musical notation that try to describe the desired characteristics of performance, that can't be otherwise encoded into that notation. For example, "with feeling".
That reminds me of an argument on here a while back: where I said I wished Spotify let you filter tracks by presence of pitch-correction or autotune. This wasn't because I thought autotune was 'bad' or modern artists were 'fake', but because sometimes I wanted to listen to vocals as a raw performance - intonation, stability, phrasing - I wanted the option of listening to recordings that let me appreciate the _skill_ possessed by the artists that recorded them.
I got _absolutely destroyed_ in that comments section, with people insisting I'm a snob, that I'm disrespectful, bigoted towards modern artists, that there's no way I can actually hear the difference, and if I can't, why does it even matter, and anyway everyone uses it now because studio time is expensive and it's so much cheaper than trying to get that perfect take. People got so angry, I even got a couple of DMs on Twitter. All the while I struggled to articulate or justify why I personally value the _skill_ of exceptional raw vocal performance - what I considered to be performance "with feeling".
But, I had to come to terms with the fact that anyone can sing now - no-one can tell the difference, so the skill generally isn't valued any more. Oh, you spent your entire life learning to sing? You studied it? Because you loved music? Sorry dude, I dunno what to say. I guess you'll have to find another way to stand out. You could try losing some weight. Maybe show some skin.
Self evidently not the case, look at people absolutely falling over themselves to pay hundreds for seats at West End/Broadway shows just to see the spectacle of live human performance.
Actually, learning to sing was never really valued. Anyone can learn to sing, but for most that means being a backing singer. Being a lead/soloist is more about timbre and presence (including to a not insignificant extent looks). It's something you either have or you don't.
> It is a far more concise and precise way of communicating than using natural languages.
No. We have programming languages because reading and writing binary/hexadecimal is extremely painful to nigh on impossible for humans. And over the years we got better and better languages, from Assembly to C to Python, etc. Natural language was always the implicit ultimate goal of creating programming languages, and each step toward it was primarily hindered by the need to ensure correctness. We still aren't quite there yet, but this is pretty close.
Natural language is natural because it's good for communicating with fellow humans. We have ways to express needs, wants, feelings, doubts, ideas etc. It is not at all "natural" to program a computer with the same language because those computers were not part of the development of the language.
Now, if we actually could develop a real natural language for programming that would be interesting. However, currently LLMs do not participate in natural language development. The development of the language is expected to have been done already prior to training.
Invented languages and codes are used everywhere. Chemical nomenclature, tyre sizes, mathematics. We could try to do that stuff in "natural" language, but it would be considered a serious regression. We develop these things because they empower us to think in ways that aren't "natural" and free our minds to focus on the problem at hand.
Natural languages are "natural" because they evolved as the de facto way for humans to communicate. Doesn't need to be with fellow humans, but humans were all we've been able to communicate with over our ~300,000 years of existence as a species. And we've done it in thousands of varieties.
> currently LLMs do not participate in natural language development
It's quite literally what LLMs are trained on. You create the core architecture, and then throw terabytes of human-generated text at it until a model that works with said text results. Doesn't matter if it participates in language development or not, it only matters that humans can communicate with it "naturally".
> Invented languages and codes
All languages are invented; the only difference is how conscious and deliberate the process was, which is a function of intended purpose. Just look at Esperanto. Or Valyrian.
A natural language is a living thing. Every day each speaker adjusts his model a tiny bit. This has advantages but also some serious disadvantages which is why technical writers are very careful to use only a small subset of the language in their writing.
For true natural language programming we'd need to develop a language for reliably describing programs, but this doesn't exist in the language, so why would it exist in the LLM models? It will never exist, unless we invent it, which is, of course, exactly what programming languages are.
Natural languages are not invented. Written scripts are said to be invented, but nobody says a natural language like English or French is invented. It just happened, naturally, as the name suggests.
If natural language were the end goal then mathematics and music would use it too. There's nothing stopping them.
> For true natural language programming we'd need to develop a language for reliably describing programs
We really don't. Eventually we won't even be programming anymore per se. Consider communicating with someone who isn't fluent in any language you know, and vice versa. In the beginning you need to use a pretty restricted vocabulary set so you understand each other, similar to a programming language. But over time as communication continues, that vocabulary set grows and things become increasingly "natural", and it's easier for you to "program" each other.
Same with LLMs. We just need to get to the point where a model has sufficient user context (as it already has all the vocabulary) for effective communication. Like OpenClaw is currently accessing enough context for enough use cases that its popularity is through the roof. Tell it to do something, and as long as it has access to the relevant tools and services, it just gets it done. All naturally.
This is why I never use a calculator. Since my school days I have the skill to do long division. Why hit the sin button when I have the skill to write out a Taylor series expansion?
For many other purposes I have the skill to use Newton Raphson methods to calculate values that mostly work.
Those who use a calculator simply don't have these skills.
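For the record, both hand methods the joke invokes fit in a few lines each - a toy Python sketch of the named techniques (not how calculators actually compute sin or square roots):

```python
import math

def taylor_sin(x, terms=10):
    """Maclaurin series: sin(x) = x - x^3/3! + x^5/5! - ..."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

def newton_sqrt(a, x0=1.0, iterations=20):
    """Newton-Raphson on f(x) = x^2 - a, i.e. x <- x - (x^2 - a) / (2x)."""
    x = x0
    for _ in range(iterations):
        x -= (x * x - a) / (2 * x)
    return x
```

Both converge fast enough near the answer that a handful of terms or iterations matches a calculator to double precision.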
There is a notable difference between, say, doing long division with a calculator and prompting an AI to calculate the derivative of a simple continuous function. One requires _understanding_ of the function, while the other just skips the understanding and returns the required derivative.
One is just a means to skip labor-intensive and repetitive actions, while the other is meant to skip the entire point of _why_ you are even calculating in the first place. What is the point of dividing two numbers if you don't even understand the reason behind it?
I'm not quite sure I understand the logic of this and how people don't see that these claims of "well now everyone is going to be dumber because they don't learn" has been a refrain literally every time a major technological / Industrial Revolution happens. Computers? The internet? Calculators?
The skills we needed before are just no longer as relevant. It doesn't mean the world will get dumber, it will adapt to the new tooling and paradigm that we're in. There are always people who don't like the big paradigm change, who are convinced it's the end of the "right" way to do things, but they always age terribly.
I find I learn an incredible amount from using AI + coding agents. It's a _different_ experience, and I would argue a much more efficient one to understand your craft.
100%. I have been learning so much faster as the models get better at both understanding the world and how to explain it me at whatever level I am ready for.
Using AI as just a generator is really missing out on a lot.
Integration and differentiation, even before LLMs, were already something that you would be better off just getting a machine to do in most cases. It's far more important to understand what the operations represent than it is to derive the exact closed form of the result yourself, because the actual process of doing it is almost always tedious and mechanical and doesn't give you much insight into the equation you are working with.
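The "tedious and mechanical" point can be made concrete: symbolic differentiation is just a handful of rewrite rules applied recursively, which is exactly why it was among the first things handed over to machines. A toy sketch of those rules (my own illustration, not any real CAS):

```python
# Expressions as nested tuples:
#   ("x",)  ("const", c)  ("add", a, b)  ("mul", a, b)  ("pow", a, n)

def diff(e):
    """Differentiate by mechanically applying the textbook rules."""
    kind = e[0]
    if kind == "x":
        return ("const", 1)
    if kind == "const":
        return ("const", 0)
    if kind == "add":                       # sum rule
        return ("add", diff(e[1]), diff(e[2]))
    if kind == "mul":                       # product rule
        return ("add", ("mul", diff(e[1]), e[2]),
                       ("mul", e[1], diff(e[2])))
    if kind == "pow":                       # power rule + chain rule
        u, n = e[1], e[2]
        return ("mul", ("mul", ("const", n), ("pow", u, n - 1)), diff(u))
    raise ValueError(f"unknown node {kind!r}")

def evaluate(e, x):
    """Evaluate an expression tree at a numeric x."""
    kind = e[0]
    if kind == "x":
        return x
    if kind == "const":
        return e[1]
    if kind == "add":
        return evaluate(e[1], x) + evaluate(e[2], x)
    if kind == "mul":
        return evaluate(e[1], x) * evaluate(e[2], x)
    if kind == "pow":
        return evaluate(e[1], x) ** e[2]
    raise ValueError(f"unknown node {kind!r}")
```

No step in `diff` requires insight into the equation; it is pure pattern matching, which is the argument for letting a machine do it.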
But, because the numbers that get returned aren't always the right numbers, I try to approximate the answer in my head or with paper and pencil to kind of make sure it's in the ball park.
Also, sometimes it returns digits that don't actually exist, and it's pretty insistent that the digit is correct. If I catch it early I just re-run the equation but there is a special button where I can tell it that it used a digit that does not actually exist.
Sometimes, for complex ones, it tells me it's trying to calculate and provides some details about how it's going about it and keeps going and going and going, for those ones I just reboot the calculator.
Solution for a hallucinating calculator: get a second unreliable calculator to verify the work of the first one. This message brought to you by a trillion dollars in investment desperately trying to replace the labor force with pseudo-intelligent calculators.
Also, the calculator may refuse to process certain operation deemed to be offensive or against the interest of the corporate-state.
Not to forget, the calculator consumes so much processing power that most people are unable to run it at home, so you need a subscription service to access general-purpose calculation.
You probably also don't use a calculator because it uses a scary language called arabic numerals. Why write 123,456 when you could write out in english: One Hundred Twenty-Three Thousand Four Hundred Fifty-Six? English is your programming language and also your math language, right?
LLMs are able to ingest numbers. And not just Arabic numerals; Did you know that there are other kinds of number systems?
Believe it or not, they also ingest multimedia. You don't need the English language to talk to a language model. Anything can be a language; you can communicate using only images.
And for that matter, modern LLMs are great at abstract math (and like anything else the results still need proofreading).
Bad analogy. The things I delegate to a calculator, I'm absolutely sure I understand well (and could debug if need be). These are also very legible skills that are easy to remind myself by re-reading the recipe -- so I'm not too worried about skills "atrophying".
People in this thread seriously discussing the merits of the satire are letting the joke sail over their heads: the entire thing was meant to be just a setup for the Rewrite It In Rust punchline.
I mean it is true to say that most people in the West now use a car for transport, and walking has become more of a leisure pursuit (rebranded as "hiking") rather than a practical necessity.
My car is typically used twice a week and (like many others) I mostly ride my bike or walk. I'm not special at all and I certainly use the car, but it has not replaced walking.
I don't think either of us disagree though that the number of miles of non-leisure journeys walked per capita is significantly less in 2026 than it was in 1926 or 1826 though?
I always found it pretty remarkable in David Copperfield when Dickens recounts regular walks between London and Canterbury, which he apparently did make in real life.
> The central view of ape coding proponents was that software engineered by AIs did not match the reliability of software engineered by humans
That's not the reason to do ape coding. AI generated code is not innovative. If you want to build something that no one has built anything similar to then you have to ape code.
That's just not true. It's like saying compiled code couldn't be innovative, that the only innovative code is assembly. People used to say stuff like that too, in their fear of being replaced. There's nothing new under the sun, I guess [double entendre].
>The main value of modern ape coding appears to be recreational. Ape coders manifest high levels of engagement during coding sessions and report feelings of relaxation after succeeding in (self-imposed) coding challenges. Competitive ape coding is also popular, with top ranked ape coders being relatively well-known in their communities.
I have never been paid to write code, and my formal CS education is limited to AP Computer Science, and a one-credit Java class in college.
I wrote 20 years ago a backup script implementing Mike Rubel's insight <http://www.mikerubel.org/computers/rsync_snapshots/> about using `rsync` and hard links to create snapshots backups. It's basically my own version of `rsnapshot`. I have deployed it across several of my machines. Every so often I fix a bug or add a feature. Do I need to do it given `rsnapshot`'s existence? No. Is it fun to work on it? Yes.
(I've over the years restored individual files/directories often enough from the resulting backups to have reasonable confidence in the script's effectiveness, but of course one never knows for certain until the day everything gets zapped.)
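The hard-link trick at the heart of that approach is simple enough to sketch in a few lines of Python - a toy, flat-directory-only illustration of the rsync_snapshots idea (my own sketch, not the actual script):

```python
import os
import shutil

def snapshot(source, snap_dir, prev_snap=None):
    """Snapshot `source` into `snap_dir`. Files unchanged since the previous
    snapshot are hard-linked rather than copied, so every snapshot looks like
    a full copy while unchanged files share disk space (and inodes)."""
    os.makedirs(snap_dir)
    for name in os.listdir(source):
        src = os.path.join(source, name)
        dst = os.path.join(snap_dir, name)
        prev = os.path.join(prev_snap, name) if prev_snap else None
        unchanged = (
            prev is not None
            and os.path.exists(prev)
            and os.path.getsize(prev) == os.path.getsize(src)
            and os.path.getmtime(prev) == os.path.getmtime(src)
        )
        if unchanged:
            os.link(prev, dst)      # unchanged: new name, same inode
        else:
            shutil.copy2(src, dst)  # new or modified: real copy, mtime kept
```

`rsync --link-dest` does the same comparison recursively and far more robustly; this just shows why each snapshot directory can look complete while costing only the changed bytes.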
It's pretty strange to me that we imagine a world where AI can handle every problem but we still talk about code. It's like how the Jetsons had bulky TVs.
You don't talk about all the assembly high level languages make, or at least it's no longer how people view things. We don't say "look at this assembly I compiled." Instead the entire concept fades to the back.
The issue is that you're measuring this statistic incorrectly.
If you look at the per capita number of people talking about assembly out of all the people on the planet, it's highly likely there are more people looking at assembly now than in whatever your "back then" was. Programmers were simply a tiny part of the population back then.
Each time we make coding easier and more high level we invite more programmers into the total pool.
> You don't talk about all the assembly high level languages make, or at least it's no longer how people view things.
Speak for yourself. I routinely look at assembly when worrying about performance, and occasionally drop into assembly for certain things. Compilers are a tool, not a magic wand, and tools have limits.
Much like LLMs. My experience with Claude Code is that it gets significantly worse the further you push it from the mean of its training set. Giving it guidance or writing critical “key frame” sections by hand keep it on track.
People who think this is the end of looking at or writing code clearly work on very different problems than I do.
> You don't talk about all the assembly high level languages make, or at least it's no longer how people view things. We don't say "look at this assembly I compiled." Instead the entire concept fades to the back.
"Aping in" in crypto means (meant?) buying crypto without doing any research.
I know it's not what the thought piece is about, but it's equally accurate to say engineers are "aping in" on AI coding without doing any research. Very much the same vibe; my anti-AI friends suddenly changed their tune to shill slopped-together apps.
I expect it to go about as well as it did in crypto.
NGL the flaw in this piece is the same flaw in every "AI will replace X" argument - it assumes the bottleneck was ever the typing. It wasn't. The bottleneck is knowing what to build and why.
I use AI agents for probably 80% of my code output now and IMO I'm more productive than ever, but only because I spent years "ape coding" first and can immediately tell when the agent is heading somewhere stupid. The people I see struggling with AI coding are exactly the ones who skipped that part.
Tbf the calculator analogy that keeps coming up in this thread is backwards - nobody is arguing you don't need to understand math to use a calculator. That's literally the point. You DO need to understand it, which is why "ape coding" isn't going away as some niche hobby. It's the prerequisite.
I would call it code-plumbing. It's like plumbers, who today are socio-economically very distinct from architects and civil and structural engineers.
They have very narrow to zero understanding of shear forces or Navier-Stokes - they don't need it to fix things.
They command high rates where labor is limited (a plumber in Indonesia will command lower PPP-adjusted hourly rates than one in America). CS education becomes a subset of applied math, since graduate hiring of code-plumbers will require a narrower certificate for fixing an AI system - which works very much like how a plumber fixing a building leak is different from a person fixing a water pipe burst under a road.
A few AI systems will become dominant; that should be a mix of your Anthropics and your Googles. They will hire code-plumbers to plumb together all the things they provide.
You don't have to use much brain at all as a code-plumber. You become a remote journeyman logging in and plumbing with the given tools, making sure there is low back pressure (a term for keeping the load on future plumbers interacting with the AI low) and the like.
I can't tell if your comparison to plumbers who don't understand theory (Navier-Stokes) is supposed to apply to "ape coders" who write code by hand or to "vibe coders" who outsource their understanding.
Ape thinking is a cognitive practice where a human deliberately solves problems with their own mind. Practitioners of ape thinking will typically author thoughts by thinking them with their own brain, using neurons and synapses.
The term was popularized when asking a computer to do it for you became the dominant form of cognition. "Ape thinking" first appeared in online communities as derogatory slang, referring to humans who were unable to outsource all their thinking to a computer. Despite the quick spread of asking a computer to do it for you, institutional inertia, affordability, and limitations in human complacency were barriers to universal adoption of the new technology.
I really like to understand the practice of software engineering by analogy to research mathematics (like, no one ever asks mathematicians to estimate how long it will take to prove something…).
Something I think software engineers can take from math right now: years of everyone’s math education is spent doing things that computers have always been able to do trivially—arithmetic, solving simple equations, writing proofs that would just be `simp` in Lean—and no one wrings their hands over it. It’s an accepted part of the learning process.
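The `simp` aside is literal: in Lean 4, the kind of identity a student grinds through by hand closes in one line (trivial examples, assuming the standard `Nat` simp lemmas are available):

```lean
-- Identities once proved by hand are one-liners for the simplifier.
example (n : Nat) : n + 0 = n := by simp
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

No one thinks students proving these by hand first is wasted effort; it's how the machinery becomes trustworthy to them.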
Naturally, when hobbyists spend a lot of time in a single block to get results (as they are unable to parallelize or meaningfully coordinate over multiple invocations of themselves, due to lacking key cognitive capabilities such as embeddings), they refer to it as 'going ape'.
Producing code that does what's intended. The metric is fuzzy and based on the usage of the software, not on lines of code. The code itself matters to the extent that, in practice, software tends not to be "one and done", so you need to be able to go back and modify it to fix bugs, add features, etc., and it turns out that's usually hard when the code is sloppy. Those needs should still stem from the actual user experience though, or else we've lost the plot by treating the mechanism as the goal itself.
Would my user rather have a program that works 100% in 2 weeks, or a program that works 80% in one day?
When the user needs a change made, would they prefer I spend another two weeks extending my perfect program, or throw a few LLMs at their sloppy code and have it done in a day?
That would depend on who your users are and what they're using their program for. My point is that the context of who is using the program, how they're using it, and what they're using it for are what actually matters, because most of the time, software that no one uses is by definition useless. There are circumstances where that might not apply, like code used as part of education or training (whether in a formal course or a project someone writes specifically because they're trying to learn something from the process) or when the purpose is aesthetic or humorous, but I'd argue that whatever process makes sense for them doesn't necessarily bear any resemblance given how different the goals are.
You're really asking if a user would want a program that fails a fifth of the time?
In some cases it might be better to have some crap right away and more cheaply, but even you would probably not like a 20% failure rate in most of the software you use.
And you sound like someone in 1995 saying "everyone will have their own webpage!" which is kind of true but also note how many people actually make their own custom website. Even for people who do have their own "custom" website, they usually use WordPress or an existing static-site generator with a theme they found and tweaked a little.
Websites are manually put together things. I'm talking about something that runs automatically in the background and you don't even think about it. Our computers today are 10,000 times faster than what we had in 1990. Now extrapolate into the future. 100 years from now someone will be able generate a custom OS at today's level as a toy in minutes or less.
The speed of light disagrees with you unless subatomic computing pops up at some point.
Or to put this a different way, they won't be developing a whole-cloth custom OS but more likely customizing a Linux kernel to do what they want. The reason is that there is some minimum of problem-space exploration required, hence entropy generation needed, to understand the interactions of the hardware and its limitations.
Hence if you wanted to run this 10,000,000X faster computer in the future to do what you are saying, it would explode like a supernova with the energy concentration required to do it quickly.
TL;DR it's a few trillion times more energy efficient not to do this.
It's amusing to me that folks think nanometer size is the only way to increase density. Also your math is quite a bit off. 10,000 times more compute than a current average desktop is only 10,000 times more energy ... today. Let alone decades from now.
I shall now drive my fart car back to my cozy meatcave from the public meatspace so that I can do some good old ape coding with my smelly carbon-based friends in peace.
Maybe the LLMs today are deeply flawed and cannot replace programmers. But, one day, LLMs (or some other AI approach) _will_ be successful in replacing programmers. It might not be this year or the year after.
I do however feel pretty confident in saying that there will be few programmers in 2076. This piece will look quite prescient.
It's just like how we say "can you imagine programming on a punchcard?"
I don't understand the stance of the post, and since it's the first post in the blog (congrats on getting this hot with your first post) I am unable to investigate further.
Is it sci-fi like writing from the perspective of a future person?
It sounds like someone trying to make assumptions sound like facts. Not a fan.
It is presented as a Wikipedia article from the future describing a subculture of tomorrow. See also https://qntm.org/mmacevedo for another example of this genre.
That "punchline" seems just a final argument in support of the thesis (that manual coding is becoming absurd, and only people as dumb as apes will insist on doing it).
Why do people think "agent coding" is a skill? There is not a single programmer who is "unable to program with agents". It's like saying Albert Roux was unable to heat up a ready meal in a microwave.
> This is meant to insult AI skeptics, let's not pretend to be idiots.
Only an idiot would read the piece in that way.
>It should be flagged and taken down.
Even if it really did "insult AI skeptics" (and, again, no one with any reasonable ability to comprehend wit and satire would take it that way), how is that justification to get it "flagged and taken down"?!?
>Despite the quick spread of agentic coding, institutional inertia, affordability, and limitations in human neuroplasticity were barriers to universal adoption of the new technology.
Blaming lack of adoption purely on regressive factors follows the same frame that AI firms set. It isn't very effective satire for that reason.
It couldn't be that there is something essential and elementary wrong with the output, no... all these experienced experts are just troglodytes who are wrong, and we should instead tag along with the people who offloaded the parts of their work they found tough to a machine the first chance they got.
There's no such thing as ape coding. There's still just coding, and vibe coding.
I don’t think it was meant that seriously. I read it as a humorous fiction written as if in the future, and I thought it was funny. Even speaking as a primate.
When someone so clearly misses an article written tongue in cheek and uses personal insults to let us know they missed the point, one begins to wonder. Apes code together. Apes stronger together. Return to monke.
Why has nobody mentioned yet how dangerous this really is? Have we all forgotten the great Datacenter burnings of 2031? The APEs are one step away from becoming fully fledged Luddite terrorists. Artisanal software is unamerican just like President Barron said the other day on his Twitch stream.
I always thought that ape coding is what we call vibe coding nowadays. Maybe the writer of the article (maybe an AI-generated blog?) misunderstood the terms.
"Humans are now writing code in a strict specification language so that AI agents have complete context and don't make mistakes. This specification language is called C' and has led to a whopping 20% reduction in code. 1000 lines of C++ code can be expressed in no more than 800 lines of specification C' code written by humans"
WTF is this?! Satire? AI-generated propaganda? I honestly don't get it. Can OP elaborate on why it's good content worthy of people's time? Thanks in advance.
I enjoyed reading it. Whether one believes the future will look like this fictional/hypothetical one, it encourages the reader to think about what would need to become true for this future to be plausible.
Who knows? 5 people? 10? Only those who actually read it, and even then I'm not sure. Did they read it? Or did they also believe it was written by AI? I tried to believe it was written by a human when I noticed the note in its footer. It was hard to believe, given my fear of today's trends, where much of what passes for reading is an empty dark in which human time is voided. Yet what is the main idea behind it, then, nowadays, when just a few will actually read it?
Considering how certain modern attitudes work for some people, and how much power trends and social media can offer, such terms get boosted over and over... and you just hope and keep believing in people...
Related: https://medium.com/@nathanladuke/b56da64a09ee (To Those Who Comment Their Opinion Without Reading the Whole Story... I was shocked at how many people simply read the title and then posted their opinion on the whole article...)
Yes, I understand what you're saying perfectly. And I had similar thoughts while I was writing this. I do not want to talk too much about the process of writing it, or the content itself, because I feel it's not right for me (the author) to talk about it. But I'd like to make it clear that I wrote this myself, and that many of the questions and points people have raised here have also been in my mind, and it was my intention to elicit this type of thinking. Thank you and all others for the comments - I really appreciate it, even the very negative ones. This is the first time I published something online and I'm very happy that it resonated with people.
Racism sucks and I'm bothered by it tremendously. For example the dog whistles in bored ape yacht club were obnoxious to say the least. But I don't think this is that. This is a silly satire on the ways people are getting tripped up on a fallacy, taking the concept of "ai" as being an autonomous force separable from people way too seriously. It's not of course. It's another iteration of the same old tools.