> The trap is that both OOP hierarchies and FP "make illegal states unrepresentable" create premature crystallization of domain understanding into rigid technical models. When domains evolve (and they always do), this coupling demands expensive refactoring.
At least when you refactor your types, the compiler is going to pinpoint every line of code where you now have missing pattern checks, unhandled nulls, not enough parameters, type mismatches etc.
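To make that concrete, here's a minimal Rust sketch (the `PaymentStatus` enum and `describe` function are hypothetical). Because the `match` has no catch-all `_` arm, adding a new variant during a refactor (say, `Refunded`) turns every such match site into a compile error, which is exactly the "pinpoint every line" behaviour described above:

```rust
// Hypothetical domain type. Adding a variant later makes every
// exhaustive `match` below fail to compile until it's updated.
enum PaymentStatus {
    Pending,
    Settled,
    Failed,
}

fn describe(status: &PaymentStatus) -> &'static str {
    // Exhaustive match: no `_` arm, so a new variant can't be
    // silently ignored after a refactor.
    match status {
        PaymentStatus::Pending => "still processing",
        PaymentStatus::Settled => "done",
        PaymentStatus::Failed => "needs attention",
    }
}
```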
I find refactoring in languages like Python/JavaScript/PHP terrifying because of the lack of this and it makes me much less likely to refactor.
Even with a test suite (which you should have even when using types), it's not going to exhaustively catch problems the type system could catch (maybe you can trudge through several null errors your tests triggered, but there could be many more lurking). Working backwards to figure out what caused each runtime test error is ad hoc and draining (like tracing back where a variable value came from and why it was unexpectedly null), and having to write and refactor extra tests to make up for the lack of types is a maintenance burden.
Also, most test suites I see do not contain type-related tests, like sending the wrong types to function parameters, because it's so tedious and verbose to do this for every function and parameter, which is a massive test coverage hole. This is especially true for nested data structures that contain a mixture of types, arrays, and optional fields.
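As a sketch of the nested-data case (all names here are hypothetical): when the optional field is encoded in the type, the compiler forces a decision at every access, instead of requiring a hand-written test per permutation of present/absent fields.

```rust
// Hypothetical nested structure mixing types, collections, and an
// optional field.
struct LineItem {
    name: String,
    cents: u32,
}

struct Order {
    items: Vec<LineItem>,
    tip_cents: Option<u32>, // optionality lives in the type
}

fn total_cents(order: &Order) -> u32 {
    let items: u32 = order.items.iter().map(|i| i.cents).sum();
    // The compiler won't let us add an Option<u32> directly; we must
    // make an explicit decision about the None case.
    items + order.tip_cents.unwrap_or(0)
}
```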
I feel like I'm never going to understand how some people are happy with a test suite and figuring out runtime errors over a magic tool that says "without even running any parameters through this function, this line can have an unhandled null error you should fix". How could you not want that and the peace of mind that comes with it?
> it's not going to exhaustively catch problems the type system could catch
Unless you are using a formal proof language, you're going to have that problem anyway. It's always humorous when you read comments like these and you find out they are using Rust or something similar with a half-assed type system.
I caveated that you should have a test suite anyway (i.e. because types aren't going to catch everything), and the above was supposed to be a caveat meaning "for the behaviours that the type system you have available can catch".
Obviously mainstream statically typed languages can't formally verify all complex app behaviour. My frustration is more aimed at having time and energy wasted from runtime and test suite errors that can be easily caught with a basic type system with minimal effort e.g. null checks, function parameters are correct type.
Formal proof languages are a long way from being practical for regular apps, and require massive effort for diminishing returns, so we have to be practical to plug some of this gap with test cases and good enough type systems.
> e.g. null checks, function parameters are correct type.
Once you've tested the complex things that (almost) no language has a type system able to express, you also have tested null checks, function parameter types, etc. by virtue of you needing to visit those situations in order to test the complex logic. This isn't a real problem.
What you might be trying to suggest, though, is that half-assed type systems are easier to understand for average developers, so they are more likely to use them correctly and thus feel the benefit from that? It is true that in order to write good tests you need to share a formal-proof-esque mindset, and thus they are nearly as burdensome to write as using a formal proof language. In practice, a lot of developers don't grasp that and end up writing tests that serve no purpose. That is a good point.
> Once you've tested the complex things that (almost) no language has a type system able to express, you also have tested null checks, function parameter types, etc. by virtue of you needing to visit those situations in order to test the complex logic. This isn't a real problem.
I just don't find this in practice. For example, I've worked in multiple large Python projects with lots of test cases, and nobody is making the effort to check what happens when you pass incorrect types, badly formed input, and null values in different permutations to each function, because it's too tedious and too much effort. Most tests are happy path tests, a few error handling tests if you're lucky, for a few example values that are going to miss a lot of edges.
And let's be honest, it's common for parts of the code to have no tests at all because the deadline was too tight or it's deemed not important.
If you have a type system that lets you capture properties like "this parameter should not be null", why would you not leverage it? It sits so squarely in the sweet spot of minimal effort, high reward for me (e.g. it eliminates null errors and makes refactoring easier later) that I don't want to use languages that expect me to write test cases for this.
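For instance, a minimal Rust sketch of that property (the `greet` functions are hypothetical): the parameter type itself says "not null", and the possibly-absent case is a different type that has to be handled once at the boundary rather than defensively inside every function.

```rust
// `name` here can never be null: &str is not nullable.
fn greet(name: &str) -> String {
    format!("Hello, {}!", name)
}

// The possibly-absent case is a distinct type (Option), and the
// compiler forces callers to resolve it before calling `greet`.
fn greet_maybe(name: Option<&str>) -> String {
    match name {
        Some(n) => greet(n),
        None => String::from("Hello, stranger!"),
    }
}
```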
> half-assed type systems are easier to understand for average developers
Not sure why you call them that. Language designers are always trying to find a sweet spot with their type systems, in terms of how hard they are to use and what payback you get. For example, once you try to capture even basic properties about e.g. the size/length of collections in the types, the burden on the dev gets unreasonably high very quickly (like requiring devs to write proofs). It's a choice to make them less powerful.
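Rust illustrates where that line is usually drawn (this `dot3` function is just an illustrative sketch): a *fixed* length can live in the type via `[f64; 3]`, so mismatched lengths are a compile error, but richer properties (say, "these two Vecs always have equal length") push you toward proof-style machinery, which is exactly the burden described above.

```rust
// The lengths are part of the types, so calling this with arrays of
// any other length simply doesn't compile; no runtime bounds errors.
fn dot3(a: [f64; 3], b: [f64; 3]) -> f64 {
    a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
}
```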
> Most tests are happy path tests, a few error handling tests if you're lucky, for a few example values that are going to miss a lot of edges. And let's be honest, it's common for parts of the code to have no tests at all...
This seems like a roundabout way of confirming that what you are actually saying is that half-assed type systems are much easier to grasp for average developers, and thus they find them to be beneficial because being able to grasp it means they are able to use it correctly. You are absolutely right that most tests that get written (if they get written!) in the real world are essentially useless. Good tests require a mindset much like formal proofs, which, like writing true formal proofs, is really hard. I did already agree that this was a good point.
> Not sure why you call them that.
Why not? It gets the idea across well enough, while being sufficiently off-brand that it gets the panties of those who aren't here for the right reasons in a knot. Look, you don't have to sell me on static typing, even where it's not complete. I understand the benefits and bask in those benefits in my own code. But they are also completely oversold by hyper-emotional people who can't discern between true technical merit and their arbitrary feelings. Using such a term reveals where one is coming from. Those interested in the technical merit couldn't care less about what you call it. If someone reacts to the term, you know they aren't here in good faith and everything they say can be ignored.
As someone who has actually written stuff in non-"half-assed" type systems: it's really not about understanding. Even if you understand, it's a HUGE pain to write things in them. It can be worth it if you need extremely high assurance, but in general it's just not worth it for most software.
Dynamic typing is on the other end of the spectrum. That is a huge pain precisely because there are no automated checks.
In between those two extremes there is a (subjective) sweet spot, where you don't pay much at all in terms of overhead but you get back a ton from the checks it provides.
> This seems like a roundabout way of confirming that what you are actually saying is that half-assed type systems are much easier to grasp for average developers
To clarify, I think formal verification languages are too advanced for almost everyone and overkill for almost every mainstream app. And type systems like we have in Rust, TypeScript and OCaml seem a reasonable effort/reward sweet spot for all levels of developer and most projects.
What's your ideal set up then? What type system complexity (or maybe language)? How extensive should the test suite be? What categories of errors should be left to the type system and which ones for the test suite?
> Once you've tested the complex things that (almost) no language has a type system able to express, you also have tested null checks, function parameter types, etc. by virtue of you needing to visit those situations in order to test the complex logic.
That's not true. At no point in testing `fn add(a: i32, b: i32) -> i32` am I going to call `add("a", "b")` or `add(2, None)`. Rust won't even permit me to try. In a language with a more permissive type system, I would have to add additional tests to check the cases where parameters are null or of the wrong type.
> At no point in testing `fn add(a: i32, b: i32) -> i32` am I going to call `add("a", "b")` or `add(2, None)`.
It seems you either don't understand the topic of discussion or don't understand testing (see previous comment). If the user of your function calls it in undocumented ways, that's their problem, not yours. That is for their tests to reason with.
Passing the wrong types is only your problem for the functions you call. Continuing with your example, consider that you accidentally wrote (where the compiler doesn't apply type checking):
    fn double(a: i32) -> i32 {
        add(a, None) // Should have been add(a, a)
    }
How do you think you are going to miss that in your tests, exactly?
There's also the kind of bug that hides in data which only gets parsed by another corner of the codebase at runtime:
    // apply tax to tips only in some regions
    if (taxableTips) {
        paymentInfo.tip += applyRegionalTax(paymentInfo.value)
        // ERROR: tip is undefined (instead of zero)
    }
You can't validate everything all the time, and if you try, it's easy for that validation to fall out of sync with the actual demands of the underlying logic. Errors like this crop up easily while refactoring. That's why one of the touted benefits of Rust's type system is "fearless refactoring."
Real functions are tens of lines long or more, have complex inputs, multiple branches, and call other complex functions, so tests that try a few inputs and check only a few behaviours aren't going to catch everything.
If it's practical to get a static type system to exhaustively check a property for you (like null checks), it's reckless in my opinion to rely on a test suite for that.
> If the user of your function calls it in undocumented ways, that's their problem, not yours.
Sounds reckless to me as well, because you should assume functions have bugs and will also be passed bad inputs. If a bug makes a function return a bad output, and that gets passed to another function in a way that gives "undocumented" behaviour, I'd much prefer the code to fail or not compile at all, because when this gets missed in tests it'll eventually trigger in production.
I view it like the Swiss cheese model (mentioned elsewhere), where you try to catch bugs at a type checking layer, a test suite layer, code review, manual QA, runtime monitoring etc. and you should assume flaws at all layers. I see no good reason to skip the type checking layer.
If you need to test for null checks and function parameter types, then your dismissal of "half-assed" type systems is severely misplaced. Everyone [1] agrees that testing null checks is a huge waste of time.
Trouble is that if there are gaps then the types become redundant. Consider error handling. Rust helps ensure you handle the error, but it doesn't ensure you handle the error correctly. For that, you must write tests. But once you've written tests to prove that you've handled the error correctly, you've also proven that you handled the error, so you didn't really need the type to begin with. You're really no better off than someone using PHP or JavaScript.
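A small Rust sketch of that gap (the `parse_port` function and its default are hypothetical): the compiler forces the `Err` arm to exist, but nothing in the type system says the fallback is the *right* one; only a test can pin that down.

```rust
// Rust won't let us ignore the Err case of parse()...
fn parse_port(s: &str) -> u16 {
    match s.parse::<u16>() {
        Ok(p) => p,
        // ...but the compiler can't know whether falling back to 8080
        // is correct behaviour. That's what a test has to establish.
        Err(_) => 8080,
    }
}
```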
> But once you've written tests to prove that you've handled the error correctly, you've also proven that you handled the error
Hardly. Suppose all caught errors in a particular module of code bubble up to a call site which (say) retries with exponential back-off. If the compiler can guarantee that I handle every error, I only need one test that checks whether the exponential back-off logic works. With no error handling guarantee, I'd need to test that every error case is correctly caught—otherwise my output might be corrupted.
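A minimal sketch of that setup (the retry helper is hypothetical, and the real version would sleep between attempts): every fallible call funnels through one place, so a single test of the back-off logic covers all the error paths the compiler already forced us to route here.

```rust
// All errors bubble up to this one helper, which retries with an
// exponentially growing delay. One test of this function stands in
// for per-error-case tests scattered across the module.
fn retry_with_backoff<T, E>(
    mut attempt: impl FnMut() -> Result<T, E>,
    max_tries: u32,
) -> Result<T, E> {
    let mut delay_ms: u64 = 1;
    let mut last_err = None;
    for _ in 0..max_tries {
        match attempt() {
            Ok(v) => return Ok(v),
            Err(e) => {
                last_err = Some(e);
                // Real code would sleep for delay_ms here.
                delay_ms *= 2;
            }
        }
    }
    let _ = delay_ms;
    // Panics only if max_tries is 0; callers must pass at least 1.
    Err(last_err.expect("max_tries must be at least 1"))
}
```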
The chief reason software fails is because programmers are insufficiently aware of all of the reasons the software can fail. Sure you need a test to make sure that you handle the error correctly, but if the function signature doesn't indicate the possibility of an error occurring, why would you write that test in the first place?
By the same reasoning, if the function signature doesn't indicate what specific errors can occur, why would you write a test in the first place?
No matter how you slice it, you have to figure out what the software you are calling upon does and how it is intended to function. Which is also why you are writing tests: so that your users have documentation to learn that information from. That is what tests are for. That is what testing is all about! That it is also executable is merely to prove that what is documented is true.
Let us introduce you to the concept of checked exceptions. That is one of the few paradigms we've seen in actually-used languages (namely Java) where communicating which specific errors will occur has been tried.
Why is it that developer brains shut off as soon as they see the word "error"? It happens every time without fail.
I'm aware of checked exceptions in Java. What I'm not aware of is a language which has checked exceptions as the only exception mechanism, which would be the only way to have exceptions always reflected in the function definition.
You're right, but kind of missing the way risk works. Normally you want something like a swiss cheese model[0] where different layers reduce the likelihood of issues.
Snubbing type systems because they aren't 100% failproof misses that point.