The abstraction is consistent though, and familiarity is a good thing when navigating a codebase that has N other devs pushing to it every day.
I practise TDD for peace of mind - if I add new functionality to existing code I can be 99.9% sure I haven't made any regressions. When a client's system goes down on a Friday, I can 99.9% guarantee it wasn't my code that was at fault. If I have to work at the weekend to update a production server, I'm 99.9% sure it'll go smoothly as my tests say it will.
I can actually write entire features with appropriate test coverage from the ground up and they work first time and have close to zero defects in production.
It's amazing when you spend 5-6 days writing code that does nothing and at the last moment, everything slots together with a few integration tests and wham, feature done. Not talking trivial stuff here either; big integrations across several different providers/abstractions, bits of UI, the lot.
You see a lot of people arguing against this but I'm going to be honest, they churn out a lot of stuff that doesn't actually work.
> You see a lot of people arguing against this but I'm going to be honest, they churn out a lot of stuff that doesn't actually work.
My anecdata cancels out your anecdata. The TDD practitioners that I've met have, without exception, written code that worked fine for only the one case that they've tested. Example: They'd test a method for sending a message with the string "hello". Turns out the method didn't URL-encode the message before POST-ing it, and sending anything with a space was broken. They were confident and pushed the change.
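As a sketch of that failure mode (hypothetical names, not the actual code from that project), here's how a test that only exercises "hello" passes while anything containing a space stays broken:

```python
from urllib.parse import quote

def build_message_url(base, message):
    # Naive version under test: no encoding, mirroring the bug described above
    return base + "?msg=" + message

def build_message_url_fixed(base, message):
    # URL-encode the message so spaces and '&' survive the round trip
    return base + "?msg=" + quote(message)

# The only test written: "hello" has no special characters, so it passes
# for both versions and gives false confidence.
assert build_message_url("https://api.example/send", "hello").endswith("msg=hello")

# A message with a space exposes the difference:
assert build_message_url("https://api.example/send", "hello world").endswith("msg=hello world")
assert build_message_url_fixed("https://api.example/send", "hello world").endswith("msg=hello%20world")
```

The test suite is only as good as the inputs someone thought to try; TDD doesn't change that.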
Not saying you're wrong, just that TDD doesn't seem to work for everybody, and can even be a distraction.
That's an integration test really. The clients all have an abstraction around the http endpoints so nothing touches integration in unit tests. The advantage of this is you deal with transfer objects only in the code, no HTTP which would violate separation of concerns.
For personal projects I use HttpMock myself, which fires up an HTTP server in test cases. Commercially we use WireMock.
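For unit tests the pattern looks roughly like this - a minimal sketch in Python, with hypothetical names, of a client abstraction that hands callers transfer objects while a fake stands in for the HTTP layer:

```python
from dataclasses import dataclass

@dataclass
class User:
    """Transfer object: the only shape the rest of the code ever sees."""
    id: int
    name: str

class UserClient:
    """Abstraction over the HTTP endpoint; callers never touch HTTP."""
    def get_user(self, user_id: int) -> User:
        raise NotImplementedError

class FakeUserClient(UserClient):
    """In-memory stand-in for unit tests; no server, no network."""
    def __init__(self, users):
        self._users = users

    def get_user(self, user_id):
        return self._users[user_id]

def greeting(client: UserClient, user_id: int) -> str:
    # Code under test depends only on the abstraction
    return f"Hello, {client.get_user(user_id).name}!"

client = FakeUserClient({1: User(1, "Ada")})
assert greeting(client, 1) == "Hello, Ada!"
```

The real HTTP-backed implementation of `UserClient` is then exercised separately by integration tests against the mock server.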
1) Write a test that runs the service and saves output to a file.
2) Mock out the call to just return the data from the file and validate results.
3) If you need variations on this data just modify the file/data (often as part of the test)
I usually leave number 1 in the code but disabled since it often relies on remote data that may not be stable. Having the test run more than once is not very beneficial but being able to run it later and see what exactly has changed is great.
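A minimal sketch of steps 2 and 3 (hypothetical names; step 1 is faked here by writing the data directly, since the real call would hit the network):

```python
import json
import os
import tempfile

def load_recorded(path):
    # Step 2: replay the saved data instead of calling the live service
    with open(path) as f:
        return json.load(f)

def summarize(records):
    # Code under test sees plain data only; no network involved
    return {"count": len(records)}

# Step 1 (normally run once against the real service, then left disabled):
# here we fake it by writing the data directly.
path = os.path.join(tempfile.gettempdir(), "service_output_demo.json")
with open(path, "w") as f:
    json.dump([{"id": 1}, {"id": 2}], f)

# Step 3: variations are just edits to the recorded data.
result = summarize(load_recorded(path))
assert result == {"count": 2}
os.remove(path)
```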
In this case, what's the difference if you write the test before or after though? You would still be covered. I don't lean in either direction in this argument, just curious to understand.
The difference is night and day - writing tests first means you write 'testable code' from the beginning. Following the red, green, refactor mantra means that for every change to your code, you already have a failed test waiting to pass. The result is your test cases make a lot more sense and are of a superior quality.
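As a toy illustration of the cycle (FizzBuzz, not an example from the thread): each assertion below was notionally written first, watched fail (red), then the code grew just enough to make it pass (green) before being cleaned up (refactor):

```python
def fizzbuzz(n: int) -> str:
    # Refactored result; each branch below existed only after a failing
    # test demanded it.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# The tests that drove each branch into existence:
assert fizzbuzz(3) == "Fizz"
assert fizzbuzz(5) == "Buzz"
assert fizzbuzz(15) == "FizzBuzz"
assert fizzbuzz(7) == "7"
```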
To liken it to something you may be familiar with - when commenting your code, do you think it's better to add comments in as you write the code? Or add in the comments at a later date after the code is all written? I'm sure you immediately know which approach results in better quality commenting, and it's the same with TDD.
> To liken it to something you may be familiar with - when commenting your code, do you think it's better to add comments in as you write the code? Or add in the comments at a later date after the code is all written? I'm sure you immediately know which approach results in better quality commenting, and it's the same with TDD.
Not to take the analogy too far, but usually when writing a chunk of code I can keep its behaviour in my head for a good amount of time and find it's best to add comments at the "let's clean this up for production" phase, when you can take a step back and see what needs commenting. If you comment as you go, you'll have to update your comments as the code changes and sometimes throw comments out, which is a waste of time.
Likewise with tests, I'm not saying write them far into the future, but I think having to strictly stick to red/green/refactor is going to waste time. What's wrong with writing a small chunk of code then several tests when you're mostly happy with it? Or writing several tests at once then the code?
People just don't write comments or tests after, that's the problem. If you do then that's fine, but after trying both routes I actually find TDD to feel like less work - not having to wait on long build times or manually navigate the UI actually makes for a more fun experience. Instant feedback being the fun part. Additionally, writing tests 'after' always feels like work to me and I end up hating it, especially when I didn't write the code in a testable way to begin with.
> People just don't write comments or tests after, that's the problem.
Doesn't that get caught in code review anyway though? I find being forced to write tests first can be clunky and inefficient. Also, I've worked with people who insist on the "write the minimum thing that makes the test pass" mantra which I find really unnatural like you're programming with blinkers on. TDD takes the fun out of coding for me sometimes.
Generally I'd rather sketch out a chunk of the code to understand the problem space better, figure out the best abstractions, clean it up then write tests that target the parts that are most likely to have bugs or bugs that would have the biggest impact.
I find when you're writing tests first, you're being forced to write code without understanding the problem space yet and you don't have enough code yet to see the better abstractions. When you want to refactor, you've now got to refactor your tests as well which creates extra work which discourages you from refactoring. When the behaviour of the current chunk of code you're working on can still be kept in your head, I find the tests aren't helping all that much anyway so writing tests first can get in the way.
What you describe is the typical mindset against TDD; it's difficult to explain the benefits, and really you just have to experience them for yourself. Changing your mindset is difficult, I know - why change what works, right? My only tip is to keep an open mind about it, as TDD's benefits are often not apparent to begin with: they may only emerge after a couple of days' work, or weeks, months, or even years later.
You find that you need to do less mental work, as your tests make the required abstractions apparent for you. 'the minimum thing that makes the test pass' ends up being the complete solution, with full test coverage. Any refactoring done is safe from regressions, because of your comprehensive test suite. And when other colleagues inevitably break your code, you already have a test lying in wait to catch them in the act.
> Any refactoring done is safe from regressions, because of your comprehensive test suite.
As much as I like the idea of TDD, I have a problem with this part. When some refactoring is needed, or the approach changes, it seems like you have two choices. One is to write the new version from scratch using TDD. This wastes extra time. The other is to refactor which breaks all the guarantees you got before. Since both the code and the tests are changing, you may lose the old coverage and gain extra functionality/bugs.
And unfortunately in my experience, the first version of the code rarely survives until the deployment.
I'm not sure what approach you've described here, but it isn't TDD. In the case of adding new features to existing code, as you are continually running tests you will know straight away which you have broken. At this point you would fix them so you get all green again before continuing. In this way you incrementally modify the codebase. Remember unit tests are quite simple 'Arrange, Act, Assert' code pieces, so refactoring them is not a time sink.
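A unit test in that shape might look like this (hypothetical function, just to show the structure):

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_discount_reduces_price():
    # Arrange: set up the inputs
    price, percent = 100.0, 10.0
    # Act: exercise the code under test
    result = apply_discount(price, percent)
    # Assert: check one single thing, matching the test's title
    assert result == 90.0

test_ten_percent_discount_reduces_price()
```

Tests this small are cheap to rewrite when the code they cover moves around.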
Also some refactorings are easier with tests, some are harder.
The kind @viraptor mentions is the kind that spans more than one component. For example, when you decide that a certain piece of logic was in the wrong place.
The kind of refactoring that becomes easier is when you don't need to change the (public) API of a component.
Take for example the bowling kata. If you want to support spares and strikes and you need extra bookkeeping, that's the easy kind of refactor where your tests will help you.
But if so far you have written your tests to support a single player and now you want to support two players who play frame by frame... Now you can throw away all the tests that affect more than the very first frame. (yes in the case of the bowling kata, you can design with multiple players in mind, but that's a lot harder in the real world when those requirements are not known yet)
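For reference, a minimal single-player scorer for the kata might look like this (one sketch of many possible ones). Note how every test bakes in the single-player, flat-roll-sequence API that the frame-by-frame two-player change would invalidate:

```python
def score(rolls):
    """Score a single bowling game given a flat list of pin counts."""
    total, i = 0, 0
    for _ in range(10):
        if rolls[i] == 10:                  # strike: 10 + next two rolls
            total += 10 + rolls[i + 1] + rolls[i + 2]
            i += 1
        elif rolls[i] + rolls[i + 1] == 10:  # spare: 10 + next roll
            total += 10 + rolls[i + 2]
            i += 2
        else:                                # open frame
            total += rolls[i] + rolls[i + 1]
            i += 2
    return total

assert score([0] * 20) == 0                     # gutter game
assert score([4, 5] + [0] * 18) == 9            # one open frame
assert score([5, 5, 3] + [0] * 17) == 16        # spare bonus
assert score([10, 3, 4] + [0] * 16) == 24       # strike bonus
assert score([10] * 12) == 300                  # perfect game
```

Every one of these assumes `rolls` is one player's complete game, which is exactly the assumption a two-player, frame-by-frame design breaks.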
> What you describe is the typical mindset against TDD, it's difficult to explain the benefits, and really you just have to experience them for yourself. Changing your mindset is difficult, I know, why change what works right? My only tip is to keep an open mind about it, as TDD benefits are often not apparent to begin with, they only come after a couple of days work or weeks or months later or even years later.
I've been forced to follow TDD for several years and also been given the same kind of comments to downplay any reasoned arguments against it which I find frustrating to be honest. I don't see why the benefits wouldn't be immediately apparent.
> You find that you need to do less mental work, as your tests make the required abstractions apparent for you. 'the minimum thing that makes the test pass' ends up being the complete solution, with full test coverage. Any refactoring done is safe from regressions, because of your comprehensive test suite. And when other colleagues inevitably break your code, you already have a test lying in wait to catch them in the act.
You can do all of the above by writing tests at the end and checking code coverage as well.
"Any refactoring done is safe from regressions, because of your comprehensive test suite. "
With the right tests this works great. I have also seen the opposite, where a test suite was extensive and tested the last details of the code. Then the refactor needed more time to figure out what the tests were doing than the actual refactoring. As is often the case, moderation is the key to success.
Unit tests should follow a simple 'Arrange, Act, Assert' structure and test one single thing, described in its title. I agree anything too complicated starts to defeat the point, especially when we are mainly after a quick feedback loop.
> it's difficult to explain the benefits, and really you just have to experience them for yourself. Changing your mindset is difficult, I know, why change what works right? My only tip is to keep an open mind about it
Maybe writing code for exploration and production should be considered separate activities? The problem with these coding ideologies is that they assume there is only one type of programming, which is BS, the same as assuming a prototype is the same as a working product.
What's exploratory programming though? Unless you're writing something that's very similar to something you've written before and understand it well, most programming involves a lot of exploration.
Well, UX prototypes for one. In research, most projects never go into production, those that do do so without researcher code. Heck, even in a product team, if you are taking lots of technology risks in a project, you are going to want to work those out before production (and it isn’t uncommon to can the project because they can’t be worked out).
Not really. It makes you commit to an API upfront, this is the exact opposite of what exploratory programming should be (noncommittal, keep everything open).
No with TDD you don't need to go in with a structure in mind, the structures arise as you write more tests and get a proper understanding of what components you'll require. Red, green, refactor - each refactor brings you closer to the final design.
That's the mantra often quoted but it always makes me think of the famous Sudoku example from Ron Jeffries. Basically as a mantra it falls down if you don't understand the problem domain. It's popular because it works for the sort of simple plumbing that makes up a lot of programming work. This problem is particularly true for anything creative you're trying to express as the requirements are often extremely fuzzy and require a lot of iteration.
If you don't know how to solve a problem, you actually need to do some research and possibly try a bunch of different approaches. Burdening yourself with specific, production-focused methodologies hurts. If you're doing something genuinely new, this can be months of effort.
After the fact you should go back and rewrite the solution in a TDD manner if you think it benefits your specific context.
That really isn’t exploratory programming. The end result should be code that you throw away en masse (it should in no case reach production). Otherwise, production practices will seep in, you’ll become attached to your code and the design it represents, hindering progress on the real design.
When I was a UX prototyper, none of my code ever made it into production.
>I find when you're writing tests first, you're being forced to write code without understanding the problem space yet and you don't have enough code yet to see the better abstractions.
That's why it's better to start with the highest level tests first and then move down an abstraction level once you have a clearer understanding of what abstractions you will need.
Can you do that with TDD though? Why not just sketch the code out first before you start writing tests?
I find TDD proponents don't take into account that writing tests can actually be really time-consuming and challenging, and when a lot of your code is tests, refactoring those tests becomes very tedious.
>do you think it's better to add comments in as you write the code? Or add in the comments at a later date after the code is all written?
Define "all written". If we are talking about a new function - obviously you write your comment for it once the function is ready to be commented on. And obviously you won't be commenting every line you put there, right?
Now, if we are talking about a whole new feature, which can consist of many functions and whatever else - yeah, you usually comment your code in the process of writing the feature, rather than doing it at a later time, which will never come.
I also find when following red, green, refactor that you end up producing more targeted unit tests that are more expressive of the code you are testing.
Trying to write unit tests afterwards lands me with something that appears as more of an afterthought or add on. It doesn't have to be this way I suppose, but it is more prone to.
This might be because I am more used to the red, green, refactor method though.
I also practice TDD, but with a different 'T': Type Driven Design. I find code much easier to reason about with types, and safer (your code won't compile if it doesn't pass the type check). Just model your data as ADTs and pattern match accordingly.
Of course, types alone can't represent every error case out there (especially those related to numbers or strings), so I still write unit tests for those cases. But the number of unit tests needed is much lower.
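A rough sketch of the idea in Python (using classes as a stand-in for proper ADTs; all names are made up): the type models the possible cases, while the numeric range check is exactly the kind of thing that still needs a plain unit test:

```python
from dataclasses import dataclass
from typing import Union

# Model the outcome as an ADT: a type checker can verify every case is handled.
@dataclass
class Ok:
    value: int

@dataclass
class Err:
    reason: str

Result = Union[Ok, Err]

def parse_age(raw: str) -> Result:
    if not raw.isdigit():
        return Err("not a number")
    age = int(raw)
    if age > 150:
        # Types can't bound a number's value, so this branch
        # still needs an ordinary unit test.
        return Err("out of range")
    return Ok(age)

def describe(r: Result) -> str:
    # Dispatch over the ADT; with exhaustiveness checking,
    # a missing branch becomes a compile-time error.
    if isinstance(r, Ok):
        return f"age is {r.value}"
    return f"invalid: {r.reason}"

assert describe(parse_age("42")) == "age is 42"
assert describe(parse_age("abc")) == "invalid: not a number"
assert describe(parse_age("200")) == "invalid: out of range"
```

In a language with real sum types and exhaustive matching the compiler does even more of this work for you.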
Visual tests are more general, and are more akin to putting up barriers on either side of a bowling lane so the bowling ball stays within its lane (with room to move about still). For example when using Angular, you write 'Page Objects' that have methods such as .getTitle(), .clickListItem(3) and so on, and can then write assertions to make sure the UI changes as expected by inspecting properties [1].
I usually find I build a general page object first ('this text is somewhere on the page'), then write the UI, then make the test more specific if I can after (but it's an art, as too specific and you risk creating too many false negatives when you make UI changes).
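A stripped-down sketch of the pattern (hypothetical names, with a fake driver standing in for the real browser):

```python
class FakeDriver:
    """Stand-in for a real browser driver (e.g. Selenium); serves a static page model."""
    def __init__(self, elements):
        self._elements = elements  # selector -> visible text

    def find_text(self, selector):
        return self._elements[selector]

class TodoPage:
    """Page Object: tests talk to these methods, never to raw selectors,
    so selector changes are fixed in one place."""
    def __init__(self, driver):
        self._driver = driver

    def get_title(self):
        return self._driver.find_text("[data-test=title]")

    def get_list_item(self, index):
        return self._driver.find_text(f"[data-test=item-{index}]")

driver = FakeDriver({
    "[data-test=title]": "My Todos",
    "[data-test=item-3]": "Buy milk",
})
page = TodoPage(driver)
assert page.get_title() == "My Todos"
assert page.get_list_item(3) == "Buy milk"
```

Starting general ('this text is somewhere on the page') just means the first version of the page object exposes looser queries, tightened later.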
(Also as you are interacting with the UI, these would be known as integration tests.)
I don't think you can unit test GUIs, since by their nature all tests end up being integration tests. It's easy if you assign non-CSS identifiers (i.e. use a data-* attribute for identification instead of id or class, since you want to keep those free to change for stylesheet refactors) and just hard-code the assumptions into the tests, like "when x is clicked, y should be visible" or "when I enter 'foo' into the text field, the preview label should contain 'foo'". Ideally your assumptions about GUI functionality shouldn't change much throughout the lifetime of the project, and if you use static identifiers your tests should hold up during extensive refactoring.
To a certain degree you can unit test GUIs with tools like Ranorex or Selenium. The question is how much setup you need to get the GUI on the screen with the right data.
You can, but usually with lots of effort, and you cannot test UX and design requirements anyway, which is why I tend to make this question about full TDD-based processes.
I saw an enjoyable talk recently about snapshot testing. I don't know too much about testing generally but it seems like it could be relevant: https://facebook.github.io/jest/docs/en/snapshot-testing.htm... is the general idea but it doesn't have to be confined to jest/react
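The core idea can be sketched in a few lines (a hypothetical helper, nothing like Jest's actual implementation): record the output on the first run, then compare against the recording on later runs:

```python
import os
import tempfile

def render_profile(user):
    # Hypothetical renderer whose output we want to pin down
    return f"<div class='profile'><h1>{user['name']}</h1></div>"

def check_snapshot(name, output, snapshot_dir=tempfile.gettempdir()):
    """First run records the output; later runs fail if it changes."""
    path = os.path.join(snapshot_dir, name + ".snap")
    if not os.path.exists(path):
        with open(path, "w") as f:
            f.write(output)            # no snapshot yet: record it
        return True
    with open(path) as f:
        return f.read() == output      # compare against the recording

out = render_profile({"name": "Ada"})
assert check_snapshot("profile_demo", out)          # first run records
assert check_snapshot("profile_demo", out)          # unchanged output matches
assert not check_snapshot("profile_demo", "other")  # a change is flagged
os.remove(os.path.join(tempfile.gettempdir(), "profile_demo.snap"))
```

Real snapshot tools add diffing and a way to deliberately update the recording when a change is intentional.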