This article really interests me - mainly because over the last 10 years, I've sort of independently adopted a mish-mash of what is described here as the 'clean architecture', domain-driven design, and the 'ports and adapters' approaches. I have employed these concepts on a dozen or so SaaS products, and the resulting schemes have worked extremely well and have been rock-solid frames to work from.
However... I think there's a risk in adopting these types of models without having some experienced context in them. Software design has a huge amount of 'art' to it. And, it's so easy to reach the complexity tipping point - where it all comes crashing down. For instance, I remember back in the mid-90s when the design patterns book was all the rage. And, shortly thereafter I'd find code just jam-packed with factories and builders and observers and adapters in multiple layers. It was a mess. Similarly, I can see someone reading these and then going out and creating a 'devices' namespace, a 'controllers' namespace, etc. and filling them with far too many classes and interfaces. I can even predict the rise of the 'framework' maven archetype that creates a bunch of extraneous excess.
Listen, I've been there and done that, and it just doesn't work well. It just creates a lot of architectural noise. It creates a mess for your development teams. I've learned this the hard way. Trust me.
What I recommend is starting very, very slowly. Just like when DI was the rage, and everyone would leap to integrate the most popular DI engine with all its XML configuration files and heavyweight complexity. When, in fact, all you really need at first is a couple of interfaces and a builder class in the entry point. Don't set out to create an architectural behemoth.
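To make that concrete, here's a minimal sketch of the "couple of interfaces and a builder in the entry point" idea (the names here are invented for illustration, not from any framework): the seams are plain interfaces, and all the wiring happens by hand in one place at startup.

```java
// A hand-rolled alternative to a DI container: define the seams as
// interfaces, and wire concrete implementations in one spot at startup.
public class App {
    // The "port": business code depends only on this interface.
    interface MessageStore {
        void save(String message);
        int count();
    }

    // One concrete "adapter"; a real app might add a SQL-backed one later.
    static class InMemoryStore implements MessageStore {
        private final java.util.List<String> messages = new java.util.ArrayList<>();
        public void save(String message) { messages.add(message); }
        public int count() { return messages.size(); }
    }

    // Domain service that receives its dependency; it never news one up.
    static class Greeter {
        private final MessageStore store;
        Greeter(MessageStore store) { this.store = store; }
        String greet(String name) {
            String msg = "Hello, " + name;
            store.save(msg);
            return msg;
        }
    }

    // The "builder" lives only in the entry point: all wiring in one place,
    // no container, no XML.
    static Greeter build() {
        return new Greeter(new InMemoryStore());
    }

    public static void main(String[] args) {
        Greeter greeter = build();
        System.out.println(greeter.greet("world")); // prints "Hello, world"
    }
}
```

Swapping `InMemoryStore` for a real implementation later means changing one line in `build()`, which is usually all the flexibility a young project needs.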
That said, I still think a solid 'domain model' should be at the core of it all. If anything, spend a lot of time defining the domain. Work with stakeholders (product managers, subject matter experts, managers, field and support, even developers, etc.) to get it down. Write unit tests that help make the domain very fluent. In my opinion, it's better to have a small core of very well defined domain classes than to try to boil the ocean. Start small and focus on the quality of the entities.
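As a toy illustration of that advice (the invoicing domain here is invented), a small, self-validating entity whose tests read like statements about the business:

```java
// Toy domain entity, invented for illustration: small, self-validating,
// with behavior on the entity itself rather than in a service layer.
public class Invoice {
    private final String customer;
    private long totalCents;
    private boolean paid;

    public Invoice(String customer) {
        if (customer == null || customer.isEmpty()) {
            throw new IllegalArgumentException("invoice needs a customer");
        }
        this.customer = customer;
    }

    public void addLine(long cents) {
        // Domain rule expressed in the entity: "a paid invoice is frozen."
        if (paid) throw new IllegalStateException("cannot modify a paid invoice");
        totalCents += cents;
    }

    public void markPaid() { paid = true; }

    public boolean isPaid() { return paid; }
    public long totalCents() { return totalCents; }
}
```

A unit test for this reads like a sentence a subject matter expert would agree with ("a paid invoice rejects new lines"), which is what makes the domain "fluent."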
Again, this is just from my trial-by-many-errors experiences.
Many of these architectures focus on keeping the domain model away from frameworks and libraries. And I see considerable pushback on HN and elsewhere towards this idea. I find this trend anti-intellectual (you do want to get better at your craft, don't you?), and contributing to the rapid commoditization of development: settling on "Good Enough" architecture (cough, Rails) so devs can be easily interchangeable.
What's alarming is that our languages have gotten far more powerful, but we seem to be reluctant to learn concepts from the past to further propel our understanding of how we solve problems. It seems that improvements in thinking only come when we're force-fed 'progress' by a well-marketed framework with sufficient clout.
I think it's a backlash against complexity, and also some of the backlash is because there is a naiveté that we have outgrown (anytime you outgrow naiveté you look back on your former foolishness with some resentment). Not to say that these architectures are completely bad, by any means. But these 4 architectures can "remind" us of 2 problems:
(1) Times when incidental complexity derailed us (whether we ourselves were the misguided ones who pushed it, or whether someone else inflicted it upon us is beside the point), and
(2) Times when we thought some architecture or idea was an amazing insight, and then later we found out it was only good due to other limitations in our environment/toolset/mindset. (For example, "design patterns" seemed like the greatest insight ever to the Java programmers at the time, but eventually everyone realized that many of them were just workarounds for Java's limitations. They were elegant in a world of single-inheritance OO, but not that insightful at all for functional programming, or languages with closures, or data-driven imperative programming).
So it's a "once bitten, twice shy" feeling. And an anti-hype protection cloak.
Note: I'm not saying these architectures are never a good solution. I'm sure there are cases where they are a great fit. But they still remind us of these issues, so they still trigger a bit of a backlash.
Thanks for posting this, as I was going to write more or less the same thing.
Over the years (nearing two decades at this point) I've increasingly put emphasis on reducing complexity, instead of increasing flexibility (which is what most "architectures" aim to do).
Software doesn't fall down (usually) because of a lack of flexibility. It falls down because it becomes nearly impossible to reason about the system as it becomes more complex.
In the old days we adopted this idea that refactoring code was always going to be nearly impossible. So we used plugins, really insane separations of concerns, and lots of decoration all to combat that boogeyman. All at the cost of any hope of building a sensible mental model out of our systems.
It's not that "architecture" is bad per se, it's just that most of these approaches create instant complexity, and it's just not needed (usually). Instead, finding sensible patterns and taking advantage of simple language-level constructs to maximize reusability are the only tools you really need.
Just out of curiosity, have you used any of these specific architectures on any real world projects and failed? If so, would you mind sharing your experience?
I'm currently reading up on DDD and thus far one of the main goals is to reduce complexity so you can easily reason about the software (reduce the mental load, understand the domain). Though I'm still in the early chapters of the book, thus far it has constantly reinforced this idea via layered architecture, modularizing code, aggregates, etc. There's nothing DDD specific about these ideas. I believe DDD is just about proper OO design.
I've seen projects that regretted going down those roads. I can't say they failed because of it (sad truth be told, several of them failed because they were in such a dysfunctional corporate environment that they were doomed before the project even started).
CQRS can go wrong if you try to do only 90% of it instead of 100% of it. A crude analogy: consider a nation choosing to change the side of the road its cars drive on. So the US wants to switch over to driving on the left. Well, it pretty much needs to be a universal switch, or there are going to be ugly problems. CQRS is sort of like that. It changes so many assumptions that "percolate" through the whole architecture, that it's pretty much an all or nothing thing. So if there is any dissension amongst the team, you're likely to end up in a 90% (or less) situation and start having problems.
Plus, CQRS is complex, especially if you try to do it with a relational database. And the complexity is something you will have to carry, and it will weigh you down. (The ever-present caveat -- there are situations where CQRS can be a good approach).
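For readers who haven't met the acronym: CQRS (Command Query Responsibility Segregation) splits the write model from the read model. A deliberately tiny, in-memory sketch of the shape (names invented; real systems put durable storage, a message bus, and asynchrony here, which is exactly where the complexity comes from):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal CQRS shape, for illustration only.
public class CqrsSketch {
    // Write side: validates commands and emits events; holds no
    // query-friendly state of its own.
    static class AccountWriteModel {
        private final List<String> events = new ArrayList<>();
        void deposit(String account, long cents) {
            if (cents <= 0) throw new IllegalArgumentException("deposit must be positive");
            events.add(account + ":" + cents);
        }
        List<String> events() { return events; }
    }

    // Read side: a denormalized projection built from the event stream,
    // shaped purely for queries.
    static class BalanceReadModel {
        private final Map<String, Long> balances = new HashMap<>();
        void apply(String event) {
            String[] parts = event.split(":");
            balances.merge(parts[0], Long.parseLong(parts[1]), Long::sum);
        }
        long balance(String account) { return balances.getOrDefault(account, 0L); }
    }

    public static void main(String[] args) {
        AccountWriteModel write = new AccountWriteModel();
        BalanceReadModel read = new BalanceReadModel();
        write.deposit("alice", 500);
        write.deposit("alice", 250);
        // Projection step: in real systems this runs asynchronously, so
        // reads can lag writes (eventual consistency).
        write.events().forEach(read::apply);
        System.out.println(read.balance("alice")); // 750
    }
}
```

Even in this toy, you can see the "percolating assumptions": every query path must go through the projection, so doing CQRS for only part of the system leaves you with two sources of truth that disagree.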
DDD is interesting. It has so much appeal to it. Parts of it almost read like developer-porn. And there is a lot of wisdom there. However, it too brings complexity. It is not "the simplest thing that could possibly work." Rather, it suggests you and your team really invest some mental cycles projecting the future and trying to design a domain object model that will suit your business for a long time.
That analysis could be valuable. It's hard to argue that thinking ahead is a bad idea. However, many teams, in the process of performing that deep DDD analysis, have lost their way and gone overboard with unnecessary complexity. Maybe the flaw is with developers' innate vulnerability to the siren song of complexity more so than with DDD itself. And I'm sure many have succeeded with DDD. But many teams have lost their way with it too, and usually by letting complexity get out of hand.
I think when we're not under overbearing management constraints, that's what all good developers do. "No battle plan survives first contact with the enemy" and usually no architecture survives first contact with reality (unless you force it to).
Meet my "friend", the Architecture Astronaut. They're a developer just like you and suddenly your team leader. They're about to bury your common sense approach six feet deep. :(
Although, I still slightly prefer them to my other "friend", the Procedural Bulldozer.
I fully agree with the risk that certain ideas will be overused. This leads to unnecessary complexity. I also remember the times of the GoF book and the design patterns rage.
The ideas behind those architectures require time to think through. The sooner you start, the better. Then, slowly, try adopting them. If you have a side project, then apply it over there.
It's crucially important to have a common understanding of the project within a team. The domain knowledge should come from the subject matter experts, but after that it should be a team work to decide which (if any) patterns and architecture should be applied.
Common understanding of the project... yes that is essential. I've got a tangential role on a project right now where that is NOT happening, and predictably the system is a mess. Each developer is being given task assignments by the lead but nobody really understands the overall architecture, goals, and design of the system. I'm actually not sure the lead does either, it's really one of those "make it up as you go" projects. Sadly it'll ultimately be scrapped and have been a waste of time for everyone involved, except to the extent they can learn some lessons from it.
It's a peopleware problem, more than a technical one. If there's no trust in the whole team, then all the design/architecture challenges are less important.
I've just read it today, a good post on this topic:
"We all know Conway’s law: “any piece of software reflects the organisational structure that produced it.”. In the same way, communications patterns between people affects the quality of the software. After 6 months working on a greenfield project I realised I could link all the areas with major technical debt to some unsolved personal conflicts in the team."
I hate to be self-promotional, but if you are interested in Uncle Bob's Clean Architecture, I would love it if you took a look at Obvious Architecture (http://obvious.retromocha.com). It's a working implementation of his ideas in Ruby, but the same structure can easily be done in any OO language. Actually, the implementation would be a lot easier in a language like Go or Scala that has things like type checked interfaces built in.
First rule: When you write text and introduce an acronym (e.g. DDD), please define what it means to the readers. This is an article with multiple acronyms but no definitions for them.
Thanks for the very interesting article. Sorry if it's a silly question but I'm a little bit confused. As someone who is barely starting with node.js, I'm curious if this would be a good architecture for a node.js app, or am I getting this wrong?
Sorry if the question is not clear, I can clarify if necessary :)
Thanks for posting this. I'm halfway through Eric Evans' book on DDD and have two others in my queue. I find it strange that there's very little discussion about DDD here on HN.
Reading Eric Evans' DDD book was an epiphany for me. A less ambitious introduction to DDD is InfoQ's Domain Driven Design Quickly. It's available as a free ebook, too:
Where does JPA fit into Clean Architecture? Seems like annotation-based persistence straddles several boundaries, or else you code your entities multiple times as both "real" entities and DTOs.
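To make the dilemma concrete (a hedged sketch; the class names are invented, and the JPA annotations appear only in comments so nothing here depends on a JPA jar): either the domain entity carries persistence annotations and so depends on the framework, or you keep a pure entity plus a persistence-layer record and pay for mapping at the boundary.

```java
// Option A (not shown compiling): annotate the domain entity directly.
//   @Entity class User { @Id Long id; String email; }
// Convenient, but the innermost circle now imports javax.persistence.
//
// Option B: keep the entity framework-free and duplicate its shape in a
// DTO owned by the persistence layer, mapping in both directions.
public class Boundary {
    // Pure domain entity: no annotations, no framework imports.
    static class User {
        final String email;
        User(String email) { this.email = email; }
    }

    // Persistence-layer DTO: this is where @Entity/@Id/@Column would live.
    static class UserRecord {
        String email;
    }

    // The mapping cost of Option B, paid in both directions.
    static UserRecord toRecord(User u) {
        UserRecord r = new UserRecord();
        r.email = u.email;
        return r;
    }

    static User toEntity(UserRecord r) {
        return new User(r.email);
    }
}
```

The question above is really asking which cost to accept: framework leakage into the entities (Option A) or the duplicated classes and mapping code (Option B).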