
But, isn't the point of Yagni that there is not such a large continuum? Isn't the argument that attempting to anticipate the future to any degree is generally a waste of time?

That is the argument, but that doesn't make it true.

I don't know that it requires making "architectural decisions" as much as following generally good design principles and keeping your couplings reasonably loose.

Aren't these going to give much the same result in practice? But if the YAGNI advocates are correct and we can't usefully predict the future at all, how can you judge things like where to put your module boundaries and limit coupling?



>That is the argument, but that doesn't make it true.

My comment was addressing its parent's assertion that Yagni fits on a "continuum". It's a "kind of pregnant" notion, in my view: either you're following Yagni or you're not.

>Aren't these going to give much the same result in practice?

It's probably more the same in theory. But, in practice, it's the difference between, say, simply minding separation of concerns versus attempting to develop a full "framework" based on attempts to predict the future, then implementing the solution to the current problem in that framework.

>how can you judge things like where to put your module boundaries and limit coupling

Concepts like DRY, loose coupling, encapsulation of business logic, MVC, and other design principles stand independent of any particular application.

For instance, I can't recall an application I've developed wherein it wasn't clear where to separate responsibilities (i.e. impose boundaries) for the current problem. These separations are generally applicable to future iterations.


Either you're following Yagni or you're not.

That much I agree with. What I would dispute is the implication, typical of many pro-YAGNI posters in this thread and elsewhere, that the alternative to YAGNI is somehow diving in and developing everything up-front without reference to the relative risks of requirements changing vs. incurring additional costs by delaying. That is a false dichotomy.

For instance, I can't recall an application I've developed wherein it wasn't clear where to separate responsibilities (i.e. impose boundaries) for the current problem. These separations are generally applicable to future iterations.

I suspect this is where our experience differs. To me, one reasonable guideline for modular design and separation of concerns is that each module should roughly correspond to a unit of change, in the sense that a change in requirements would ideally affect a single module without interfering elsewhere in the system. However, if your basic premise is that you can't tell anything in advance about what your future requirements might be, you might model the current known situation in all kinds of different ways, but some will be much more future-proof than others.

Consider the old chestnut of modelling bank accounts. If you only have to model a balance on a single account, you can have some data structure that stores the balance and some functions to increase or decrease it. As soon as you need to model transfers between accounts, it turns out that the above is a very unhelpful data model, and your emphasis on single accounts was a poor choice. Even the most basic assumptions about likely future applications would have led to a more useful path, but if you follow YAGNI you have to start with a single-account model and then follow an onerous migration procedure precisely when you need to work with multiple accounts for the first time.
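A minimal sketch of the migration pain described above (all names hypothetical, not code from either poster): the balance-only model has no notion of account identity, so nothing in it can express a transfer, and the model has to be rebuilt around accounts before the new feature even fits.

```python
# Hypothetical sketch of the single-account model: a balance plus
# functions to increase or decrease it. Fine for the first requirement.
balance = {"amount": 100}

def increase(value):
    balance["amount"] += value

def decrease(value):
    if value > balance["amount"]:
        raise ValueError("insufficient funds")
    balance["amount"] -= value

# Once transfers between accounts are required, none of the above helps:
# there is no notion of *which* account, so the data model must be
# rebuilt around account identities before a transfer can be expressed.
accounts = {"alice": 100, "bob": 0}

def transfer(src, dst, value):
    if value > accounts[src]:
        raise ValueError("insufficient funds")
    accounts[src] -= value
    accounts[dst] += value
```

Existing callers of `increase`/`decrease` and any persisted single-balance data would all need migrating to the keyed model, which is the onerous step the comment is pointing at.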


>implication...the alternative to YAGNI is somehow diving in and developing everything up-front without reference to...

I don't think that's the dichotomy that's being presented, nor do I think it would matter if it was. That is, one doesn't have to go to that extreme to incur the downside of a "non-YAGNI" approach. It's very easy to have contemplation of future features negatively impact a project.

>one reasonable guideline for modular design and separation of concerns is that each module should roughly correspond to a unit of change, in the sense that a change in requirements would ideally affect a single module without interfering elsewhere in the system

Wow. I think that's exceedingly difficult to pull off, and trying to design that way seems tremendously burdensome to the project out of the gate. It also seems that it would tightly couple the code to the business requirements in a way that guarantees change has maximum impact on the code. Rules don't change in neat, stovepiped ways; when changes cross-cut and overlap the boundaries you drew, all of your modularization goes right out the window.

So, interestingly, given that approach, it probably would make it more important to anticipate future changes, because your code will be less insulated from those changes!

>modelling bank accounts

Thanks for bringing this down from the abstract.

But, this is where generally good design can help. If you have your debit and credit functionality neatly encapsulated, plus a good overall model for chaining/demarcating transactions within your app, etc., then you don't need to rip apart your entire model to support transfers. In fact, I'd say you have a good head start.
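A sketch of the "head start" claim, under the stated assumptions (encapsulated debit/credit plus some way of demarcating a transaction); all names here are hypothetical illustrations, not anyone's actual design. A transfer then becomes a composition of existing operations rather than a rewrite:

```python
# Hypothetical sketch: with debit/credit encapsulated and a simple
# transaction boundary, a transfer is a composition, not a redesign.
from contextlib import contextmanager

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def debit(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

    def credit(self, amount):
        self.balance += amount

@contextmanager
def transaction(*accounts):
    """Snapshot balances; restore them if any step inside the block fails."""
    snapshot = [(a, a.balance) for a in accounts]
    try:
        yield
    except Exception:
        for account, balance in snapshot:
            account.balance = balance
        raise

def transfer(source, destination, amount):
    # The new feature reuses the existing primitives unchanged.
    with transaction(source, destination):
        source.debit(amount)
        destination.credit(amount)
```

The point being made is that `Account`, `debit`, and `credit` needed no modification to support the new requirement; only the thin composing function is new.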


It's very easy to have contemplation of future features negatively impact a project.

But it's also very easy to have failure to contemplate future features negatively impact a project. This doesn't get us anywhere.

I think that's exceedingly difficult to pull off and trying to design in such a way itself seems tremendously burdensome to the project out of the gate.

But again, you have to design some way, unless you're proposing literally a totally organic design where even things like basic modular design are considered completely unnecessary unless justified by changes required right now. As soon as you are designing some specific way, you are necessarily making choices, and I would argue for making the best choices you can given the information you have available at the time. That may mean you don't have enough confidence in some particular requirement to justify working on it yet, or it may not.

If you have your debit and credit functionality neatly encapsulated, plus a good overall model for chaining/demarcating transactions within your app, etc., then you don't need to rip apart your entire model to support transfers.

But where did that good overall model you mentioned come from if you weren't anticipating potential future needs to some extent?


>But it's also very easy to have failure to contemplate future features negatively impact a project.

Perhaps, but in the former case you guarantee an impact to the project. And, empirically speaking, the odds are that impact will be negative. Trying to design for some unknown future is more likely to get you off the rails than designing well to known requirements.

>unless you're proposing literally a totally organic design where even things like basic modular design are considered completely unnecessary

Well, "modular" is such an amorphous word. Building a domain model and other functionality around current requirements will yield some degree of modularization. I'm suggesting that such modularization should tie back into the actors and objects dictated by current requirements, and is more at the programmatic level. It's horizontal (relative to requirements). This, as opposed to a vertical approach that attempts to stovepipe individual use cases into modules. The latter scenario can lead to more pain when requirements change.

I sincerely believe that may be why you find it so important to anticipate future changes--because you've totally pegged your design to a modularization scheme that demands your requirements stay within neat boundaries. So, it's really important that you define those boundaries well from the outset, or you may face some serious re-work.

>But where did that good overall model you mentioned come from if you weren't anticipating potential future needs to some extent?

But that's really my point: good design practices in themselves do anticipate the future to a significant extent. That is, a system that is well-modeled with good separation of concerns is more flexible, less coupled and thus more extensible. But one need not anticipate any specific future requirements to achieve this. Just design well based on known requirements.


I wonder whether we're still slightly talking at cross-purposes here. You seem to have the idea that I am somehow advocating always trying to anticipate or emphasize future requirements at the expense of what I'm doing right now, or that I'm arguing for some sort of fixed architecture or design where you need to magically know everything up front. This is certainly not what I'm trying to say. On the contrary, I adapt designs and refactor code all the time, just as many others here surely do.

But I still feel that there is something rather selective in the arguments being made for any near-absolute rule about not taking expected future developments into account when designing. Whenever we talk about modular design or domain models or use cases, and whichever words we happen to use, we are always implicitly talking about making decisions of one kind or another, as evidenced by the fact that even those who are supporting YAGNI in this HN discussion are advocating things like refactoring to keep new work in good condition.

Of course we can and should revisit those decisions later if we have better information, and of course sometimes we will change things as a result. The only point I'm trying to make here is that I'd rather start from the position most likely to be useful, not the minimal position. To me it is all about probabilities, and perhaps unlike some here, I don't find that predicting what I'm going to need my code to do next week is some inhumanly challenging task with a 105% failure rate and project-ending costs. On the contrary, the vast majority of the time on a real project, things will in fact turn out exactly the way we're all expecting on those kinds of timescales, and probably pretty close a month out. Looking six months or two years ahead, we probably have at best a tentative plan, and just like the YAGNI fans, we don't want to invest significant resources catering to hypothetical futures with a high probability of changing before we actually get there.

In this context, I find it is often premature pessimisation and willful ignorance making the work that will almost always turn out to be unnecessary, not the other way around. The idea that I should discard knowledge of what is almost certainly coming next week and create more work for myself on Monday just in case something totally unexpected happens by the end of this week is bizarre to me. Maybe we've just worked on very different kinds of projects or had very different standards for the management/leadership who are making decisions on those projects.


Well, when you start bringing the timeline in as close as the following week, then you're talking about something very different (or at least vs. what I've been considering). Because, even with iterative/agile development, a week out can likely be considered in-scope (or close enough).

So, narrowing the timeline so dramatically fundamentally changes this entire discussion. YAGNI's statement implies that there's a reasonable degree of uncertainty with regard to the features you're considering. However, there is generally much less uncertainty about what you'll be building in the following week. The requirements are essentially known for all intents and purposes. So, at that point, it's more like, "I Know We Need This", because you're essentially driving it from the current requirements.

So, I think the real determinant is whether you're looking at current requirements, vs. trying to anticipate future requirements. If you're doing the latter then, I think we just disagree.

And, if your experience is that extrapolating requirements far off into the future has been helpful on average, then we have definitely worked on different kinds of projects.


Well, when you start bringing the timeline in as close as the following week, then you're talking about something very different (or at least vs. what I've been considering).

Perhaps this is where the crossed wires happened, then.

To me, this is all a matter of degrees. I know exactly what I need to build immediately -- what code I'm writing this morning, what tests I'm currently trying to make pass, or however you want to look at it. I also have a very clear idea of what I need to build later this week. I have some idea of what I need to build by the end of the month. I have a tentative idea of what I'll need to build in three months. On most projects, I probably have very little confidence in what I'll be building a year from now.

When I'm designing and coding, the amount of weight I give to potential future needs is based on that sliding scale of confidence. If I'm writing two cases for something right now and expect to need the third and final possibility next week, I'll probably just write them all immediately, so that whole area is wrapped up and I don't have to shift back to look at this part of the code again in the immediate future. For something that I will probably need in a few weeks, it's less likely that I'll implement it fully right now, but I might well leave myself a convenient place to put it if and when the time comes if that doesn't require much work. For me, the latter is sometimes a bit like deliberately leaving the final step of a routine task unfinished at the end of a working day so I can get going the next morning with a quick and easy win -- it's as much about the positive mindset as any expectation that doing something immediately vs. very soon will make any practical difference to how well it gets done.

Obviously as I look further ahead, confidence in specific needs tends to drop quite sharply on most projects. For something tentative that is being discussed as a possible future requirement for later this year but with no clear spec yet, it's unlikely that I would write any specific code for it at all at this stage. However, I might still take into account likely future access patterns for data in my system if I'm choosing between data structures that are equally suitable for the immediate requirements. I might take into account the all-but-certainty that we will need many variations of some new feature when planning the architecture for that part of the system and design in a degree of extra flexibility, even though we have no clear idea of exactly which variations they will be yet and I'm not actually writing any concrete implementations beyond the first one at this stage.

And, if your experience is that extrapolating requirements far off into the future has been helpful on average, then we have definitely worked on different kinds of projects.

That's not really how I look at it. I'm not so much extrapolating (a.k.a. guessing) specific requirements far ahead. I'm just allowing for the possibility that I may have some useful knowledge about future needs without necessarily having all the details yet. If I do, I will take advantage of that to guide my decisions today to the extent that confidence justifies doing so. The amount of actual change in designs or code that results will vary with both the level of confidence I currently have in whatever potential requirements we are talking about and in the assessment of how much effort is required to allow for them now vs. how much effort will potentially be saved later if the expectation is accurate.



