Real-world implementations of mathematics always have limitations, and the question is when to model them mathematically. In particular, do you model them in the type system and make all your function types that much more complicated? Doing so makes most functions incompatible without glue code.
The most obvious one is integer overflow, where a common solution is "don't model it and hope it doesn't happen." Switching to bignums moves the issue to performance, where large inputs can cause the program to hang or run out of memory.
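To make the overflow options concrete, here's a minimal Rust sketch (the values and variable names are just illustrative) of the choices short of switching to bignums: silently wrap, surface the overflow in the return type, or clamp at the limit.

```rust
fn main() {
    let a: u8 = 250;
    let b: u8 = 10;

    // "Don't model it": wrapping arithmetic silently discards the overflow.
    // (A plain `a + b` panics in debug builds and wraps in release builds.)
    let wrapped = a.wrapping_add(b);            // 4

    // Model it in the return type: None signals that the result didn't fit.
    let checked: Option<u8> = a.checked_add(b); // None

    // Clamp instead of failing: the result sticks at the type's maximum.
    let saturated = a.saturating_add(b);        // 255

    println!("wrapped={wrapped} checked={checked:?} saturated={saturated}");
}
```

An arbitrary-precision integer removes the overflow case entirely, but as noted above it trades a correctness hazard for unbounded time and memory.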
Performance and memory usage are never modelled in the type system, which implies that functions with vastly different performance are interchangeable. But we rely on this to be able to improve performance; it's actually better left unspecified.
Cracking down on side effects means that in some languages you need to change function signatures to add a log statement, or you can't easily swap in a remote procedure call for a local one. Checked exceptions in Java are an example of how fine-grained tracking of effects helps you paint yourself into a corner where there is rampant cheating out of necessity.
I sometimes wonder if maybe languages like Rust should have every function implicitly return a Result, just so you can report an error later if you need to. Everything is partial, or if not, maybe someday it will be?
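To spell out what that might look like in today's Rust (everything here is hypothetical; Rust does not actually do this), the implicit version would amount to every signature quietly becoming a Result:

```rust
// Hypothetical sketch: what "every function implicitly returns a Result"
// would desugar to if written out by hand in current Rust.

#[derive(Debug)]
struct AnyError(String);

// Today this signature promises the function cannot fail.
fn double(x: i32) -> i32 {
    x * 2
}

// Under the imagined scheme, even `double` would really be something like this,
// so a later version could start reporting errors without breaking its callers.
fn double_fallible(x: i32) -> Result<i32, AnyError> {
    x.checked_mul(2)
        .ok_or_else(|| AnyError("overflow in double".to_string()))
}

fn main() -> Result<(), AnyError> {
    println!("{}", double(21));
    println!("{}", double_fallible(21)?);
    Ok(())
}
```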
But there are languages where any field access can be replaced with an arbitrary function call, and that seems bad, so the idea is probably impractical.
> Performance and memory usage are never modelled in the type system
There's no need tho? If you have types that do the same thing but with different performance guarantees, you can use an interface + subclasses to create your own performance-aware types. Same with memory.
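In Rust terms that might look like the following sketch (a trait instead of subclassing; all the names here are made up): the interface stays the same and the chosen implementation carries the performance contract.

```rust
use std::collections::HashMap;

// Hypothetical interface: lookup by key. The performance contract is carried
// by which implementation you choose, not by the type system itself.
trait Directory {
    fn lookup(&self, key: &str) -> Option<u32>;
}

// O(n) scan over a vector: fine for small data, cheap to build.
struct ScanDirectory {
    entries: Vec<(String, u32)>,
}

impl Directory for ScanDirectory {
    fn lookup(&self, key: &str) -> Option<u32> {
        self.entries
            .iter()
            .find(|(k, _)| k.as_str() == key)
            .map(|(_, v)| *v)
    }
}

// O(1) average lookup via a hash map: costs more memory up front.
struct HashedDirectory {
    entries: HashMap<String, u32>,
}

impl Directory for HashedDirectory {
    fn lookup(&self, key: &str) -> Option<u32> {
        self.entries.get(key).copied()
    }
}

fn main() {
    let scan = ScanDirectory {
        entries: vec![("alice".into(), 1), ("bob".into(), 2)],
    };
    let hashed = HashedDirectory {
        entries: scan.entries.iter().cloned().collect(),
    };
    // Both satisfy the same interface; only the chosen type hints at the cost.
    let dirs: Vec<&dyn Directory> = vec![&scan, &hashed];
    for d in dirs {
        println!("{:?}", d.lookup("bob"));
    }
}
```

The compiler still treats the two as interchangeable wherever the trait is accepted; the performance distinction lives in the type names and their documentation.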
> Cracking down on side effects means that in some languages you need to change function signatures to add a log statement
That's an implementation shortcoming. Java isn't forcing you to do this.
> Checked exceptions in Java are an example of how fine-grained tracking of effects helps you paint yourself into a corner where there is rampant cheating out of necessity.
At some level, you have to limit your set of exceptions or error codes; otherwise, exceptions from arbitrary code won't be catchable. I think returning a status/error is much better - it forces all methods in the call stack to think more about the errors that underlying methods will throw and what they will bubble up or handle. It's more work tho.
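For comparison, this is roughly what explicit error returns look like in Rust (a small sketch with invented names): each function declares its error type, and every caller either handles it or visibly passes it up with `?`.

```rust
use std::fs;
use std::io;
use std::num::ParseIntError;

// Errors are part of the signature, so callers must decide what to do with them.
#[derive(Debug)]
enum ConfigError {
    Io(io::Error),
    Parse(ParseIntError),
}

fn read_port(path: &str) -> Result<u16, ConfigError> {
    // `?` bubbles the error up, but only after it has been converted into this
    // function's declared error type, so the choice is visible in the code.
    let text = fs::read_to_string(path).map_err(ConfigError::Io)?;
    let port = text.trim().parse::<u16>().map_err(ConfigError::Parse)?;
    Ok(port)
}

fn main() {
    // The top of the call stack has to handle whatever bubbled up.
    match read_port("port.txt") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => eprintln!("could not read config: {e:?}"),
    }
}
```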
We use type systems to find bugs when dependencies are upgraded, and performance not being part of the contract between modules means that performance degradation won’t be caught by the type system.
You can deal with it in other ways like running benchmarks often, but I hope you can see why it might be nice in principle if such implicit contracts were made explicit and checked by the compiler? For example, it might make hard realtime systems easier to build and upgrade if there were some formal way of reasoning about deadlines. I haven’t heard of it ever being made practical, though.
Also, I was thinking of Haskell rather than Java as a language where adding logging would change a function’s type signature.
There’s a basic difference between functions that can fail (returning an error) and those that are considered to always succeed (so no error), which I think is well captured by a type system, whether you do it like Go, Rust, or a functional language. If there is any kind of error, you can catch it without distinguishing between them. The same could be done using exceptions too.
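A minimal Rust sketch of that distinction (function names are made up): the infallible case is a plain return value, the fallible case is a Result, and the caller can catch any error without caring which kind it is.

```rust
use std::error::Error;

// The signature promises this cannot fail.
fn always_succeeds(x: u32) -> u32 {
    x.saturating_add(1)
}

// The signature says this can fail; Box<dyn Error> is the "some error,
// don't care which" catch-all, much like catching a base exception class.
fn may_fail(input: &str) -> Result<u32, Box<dyn Error>> {
    let n: u32 = input.trim().parse()?; // the parse error is boxed automatically
    Ok(always_succeeds(n))
}

fn main() {
    println!("{:?}", may_fail("41"));   // Ok(42)
    println!("{:?}", may_fail("oops")); // Err(...), handled without knowing the kind
}
```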
The problem with Java is that they tried to distinguish further by what kind of error can be thrown, and this sort of turned into a bad effect system. There are methods that throw IOException and can do I/O and others that throw SQLException so they can talk to a SQL database, and, uh, don’t those categories overlap? And isn’t this exposing implementation details? Why should the caller care whether you use a SQL database or not?
I suspect more principled effect systems might end up having similar problems with functions being inflexibly incompatible due to having different effects, as described in "What Color Is Your Function?" for the sync versus async distinction.