I can see you disagree as a matter of principle and personal experience. And that's alright. I just think it's as valid an opinion as any other. Some people like explicitly defined types at various levels; some want it all inferred, or partially inferred; and some don't want types at all, or want them optional or gradual, etc.
And no side has been able to make a claim over the others; even now, years later, the debate still rages on. The data doesn't show up. And different people have different experiences that lead them down different paths.
So I like to think of it more as a personal choice. Just as master chefs each have their preferred brand and type of knife, so do developers with programming languages and type checkers.
Any language that supports type inference will also support annotations if your taste/linter/boss demands it. That's fine. But hobbling the compiler by not including type inference at all, because you think it causes bugs, is quite definitely stupid. I literally don't understand how anyone could dispute this except out of inexperience with type inference.
Based on the sentence with the tangent about "optional or gradual" etc., I suspect you are conflating two debates. Of course there is debate about the degree to which static type systems are helpful. That is quite separate from debating whether to include the very convenient and theoretically rock-solid feature of type inference. I have hardly heard anyone discuss the latter, much less attempt to gather empirical data.
Returning to the original point, there is no conflict between static typing and concision, except in C++/Java style type systems. I especially don't understand how a "static type checker enthusiast" would think there is, but if you meet one tell them Someone On The Internet told them to learn OCaml already.
Like I said, I'm definitely pro-inference; in fact, I don't even mind a dynamic language (depending which one). Give me a Lisp or Smalltalk and I don't even need static type checking. Now, OCaml is actually next on my list, but I've done Rust, Haskell, and Elm, and used Kotlin, Scala, Fantom, and C#. And I've used core.typed with Clojure. For now, Clojure is my language of choice, but I definitely prefer a functional language with a fully inferred type system over the imperative flavours.
But there is a segment of the static checking world dedicated to the non-Hindley-Milner, mostly imperative, mostly OO scene, and I feel denying Java's influence on static typing would be reductive, as it is one of the most widely used statically typed languages.
Anyways, I just wanted to bring their voice to the conversation, even if I don't agree and am a big fan of the local type inference that Java 10 added. Maybe the fact that they finally added it shows they reckon they were wrong about it.
Now, for causes of error. One they describe is "action at a distance", like this:
var result;
// many lines of code
result = new ArrayList<String>();
Now, because the assignment could come much later in the code, someone might mistake the type of the variable, since it isn't obvious what it would be when the assignment is so far from the declaration.
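For what it's worth, the var that eventually shipped sidesteps this exact hazard by requiring an initializer right at the declaration (the class and variable names below are mine):

```java
import java.util.ArrayList;

public class Distance {
    public static void main(String[] args) {
        // var result;  // won't compile: cannot infer the type of an uninitialized local
        var result = new ArrayList<String>();  // the initializer must sit at the declaration
        result.add("ok");
        System.out.println(result.size());
    }
}
```

So the "declare here, assign much later" shape simply can't be written with var.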
Another issue they talked about was multiple assignment. For example:
var x = "Hello";
// multiple lines later
x = new CustomerName();
Now what should the type of x be? Java could infer that it must be the closest common ancestor, which might be Object in this case. That's most likely not what was intended, though, and the rest of the type checking will probably then lead to weird errors, like: why doesn't x work with String methods or CustomerName methods?
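In the var that actually shipped, Java avoids choosing an ancestor at all: x is pinned to the initializer's type, so the second assignment is simply rejected. A sketch, with CustomerName standing in as a made-up class:

```java
public class Reassign {
    static class CustomerName {}

    public static void main(String[] args) {
        // var x = "Hello";
        // x = new CustomerName();  // won't compile: x was inferred as String
        Object x = "Hello";          // the "common ancestor" reading needs an explicit annotation
        x = new CustomerName();      // compiles, but neither String nor CustomerName methods are visible on x
        System.out.println(x.getClass().getSimpleName());
    }
}
```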
Another issue is ABI compatibility. Suppose the return type of a method is inferred, the method is compiled, and a dependent class uses it. A programmer could change the implementation and have inference produce a compatible but different type all of a sudden; say it now infers ArrayList instead of the interface type List. Now the client code is broken, because it assumed the List type, and this is probably not the intent of the programmer but a side effect of the inference changing without their notice.
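Java doesn't infer method return types, so this stays hypothetical there, but the mechanics can be sketched with the explicit annotation playing the role of the stable contract: as long as the signature says List, the author can swap implementations freely, whereas an inferred signature would silently track whatever concrete class the body happens to return. (Class and method names below are mine.)

```java
import java.util.LinkedList;
import java.util.List;

public class AbiSketch {
    // The annotation is the contract: clients compile against List, not a concrete class.
    static List<String> names() {
        // Swapping, say, ArrayList for LinkedList here is invisible to clients.
        // If the return type were inferred from the body instead, this edit would
        // silently change the method's type under already-compiled callers.
        return new LinkedList<>(List.of("a", "b"));
    }

    public static void main(String[] args) {
        List<String> ns = names();
        System.out.println(ns.size());
    }
}
```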
I think they talk about more stuff. You'd have to dig into the mailing lists and JEPs and all to find most of it.
I'm not trying to deny Java's influence, just saying that it's bad. I do think it was worth bringing up (I upvoted your top-level comment), but mostly because we need to recognize the damage it has caused to the discussion around typing. People try to talk about "dynamic vs static", or worse run studies, when their only static language is Java. Their arguments are nonsense and their studies are wasted as far as I'm concerned, since they're dealing with a straw man. Java is treated as the poster child when it should be the black sheep. FFS, it still has null pointers.
Your examples are mostly not problems in reasonable languages. Just don't declare variables before initialization. Assigning a variable two incompatible types is an error, just like it would be without inference.
I'll grant the ABI one is interesting. In situations where separate compilation is on the table, maybe you want to annotate function types. But obviously the myriad of inferred languages don't suffer much from this. You could just as reasonably argue that the class author made a breaking change, and it should be treated exactly as if they changed the annotation.
What I wonder though is what about Java is bad? Like, is the lack of inference the key differentiator between good static type systems that help, and those that get in your way and slow you down?
Or are we actually claiming something more: that the paradigm of Java prevents its type system from being useful? Then we're as much having a conversation about type systems as about functional vs imperative/OO programming and other such paradigms.
And in fact, if you think of it that way, maybe it isn't a type system you've been looking for after all, but a better language/paradigm. And you can see that people moving away from Java to Erlang, Clojure, or Elixir all have this "OMG this is so much better" feeling. Similarly, someone who moves from Java to Haskell or OCaml also has this "OMG this is so much better" feeling.
Whereas someone who moves between Python/Ruby and Java, like the OP, thinks: I don't see the big deal with static types?
Now someone could say: well, it's because you moved to a bad static type system; try Haskell or OCaml instead. But that's not just Java with a different type system; now you've also fundamentally changed the paradigm and so much more.
It becomes difficult then to distinguish the type system from other aspects of the language.
Now, I'll show some bias here, but I've used Haskell and Clojure. The languages have a lot in common; one of the big differences between the two is the type system (ignoring purity). When moving between them, there's no clear feeling of one being better; it's much more "hard to tell". The Haskell type system can be neat: it can help me make sense of what is what in the code, what fields are available, what functions are supported, let me rename things with more confidence, etc. But it also gets in my way sometimes, annoyingly, distracts me from my problem domain, and can feel limiting at times.
Amusingly, this is the same thing someone who's debating the pros/cons of Java's type system over Ruby or Python would say.
I guess where I'm going with this is to say that a type system is only part of the story. And with Java, is it truly the type system that is the problem? I don't think so. In fact, I think maybe within the context of the semantics of Java, its type system is actually the best it can be.
By the way, I think the issue with Java's inference of mutable variables is that most of the time you want to infer the most generic type. Say:
var items = new ArrayList<>();
It'd be nice for items to be a List and not an ArrayList; that's what a programmer would have annotated.
If a method returned items, you'd want its return type to be List, so that clients don't break if you later decide to use a different kind of List.
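As it happens, the var that shipped goes the other way and pins the concrete type; presumably that's part of why inference stayed local-only, where the variable's type isn't part of any contract. A small check (names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class Widen {
    public static void main(String[] args) {
        var inferred = new ArrayList<String>();      // static type: ArrayList<String>, not List<String>
        inferred.ensureCapacity(10);                 // ArrayList-only method is visible, proving the concrete type
        List<String> annotated = new ArrayList<>();  // the annotation keeps the abstract interface type
        // annotated.ensureCapacity(10);             // won't compile: List has no such method
        System.out.println("ok");
    }
}
```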
Another challenge now though becomes what is the type of elements in the List?
You can go by the type of the first added value, but what if that sits behind conditional branches, like:
var items = new ArrayList<>();
if (thing == 1)
items.add(100);
else
items.add("hello");
Now what?
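In practice javac answers this with a shrug: the diamond falls back to Object, both adds compile, and the caller is left holding a list whose elements it can only inspect at runtime. A sketch:

```java
import java.util.ArrayList;

public class Mixed {
    public static void main(String[] args) {
        int thing = 1;
        var items = new ArrayList<>();  // inferred as ArrayList<Object>
        if (thing == 1)
            items.add(100);             // autoboxed Integer, accepted as Object
        else
            items.add("hello");         // String, also accepted as Object
        // All the static type tells you is Object; anything more needs a runtime check.
        Object first = items.get(0);
        System.out.println(first);
    }
}
```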
I feel a lot of the mutable semantics of Java, combined with its inheritance-based subtyping, make type inference much harder, because of all these weird edge cases.
> What I wonder though is what about Java is bad? Like, is the lack of inference the key differentiator between good static type systems that help, and those that get in your way and slow you down?
Lack of type inference is part of it - Java's type system gets in your way more than better type systems due to it. But the bigger part is that Java's type system doesn't give you the tools to model things properly. One big issue is nulls - no matter what you do with types, you have the possibility of null constantly in your way. The other is the lack of sum types, without which many things that you want to model become difficult, verbose, and error-prone.
IMO Java's crippling flaw is that it is very verbose (largely but not entirely due to lack of type inference), but its type system nevertheless permits a lot of errors (notably NPEs) that other languages easily catch. In a lot of ways, it gets the worst properties of both static and dynamic languages, both the inflexibility and the unsafety, all the weight and precious little of the power. It would be a lot more tolerable if it was null-safe with proper algebraic Option types.
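Java's java.util.Optional gestures at this, but as a library convention rather than a language guarantee; a sketch of both the help and the gap (the lookup function is a made-up example):

```java
import java.util.Optional;

public class OptDemo {
    // Absence is in the signature, so callers must handle it explicitly.
    static Optional<String> lookup(String key) {
        return "name".equals(key) ? Optional.of("Ada") : Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(lookup("name").orElse("unknown"));
        System.out.println(lookup("age").orElse("unknown"));
        // The gap: nothing stops a reference of type Optional from itself
        // being null, so the NPE hazard never fully goes away.
    }
}
```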
So yeah, a smart programmer who moves from Java to either Haskell or Clojure will likely find it a breath of fresh air, for very different reasons. Incidentally, the reason I recommend OCaml is because it's a HM-inferred language but with a much more pragmatic type system than Haskell's. You can just println wherever you please.
It's true that subtyping makes inference a lot harder. My reflex is that subtyping is overrated and you should just use interfaces and composition, but you would have a decent claim that I'm moving the goalposts or something. I'm sure OCaml has a semi-decent solution, given the O part of the name, but I don't know off hand what it is. Worst case, if the compiler chokes on your subtype-heavy code I would say to just annotate it. Annotations are certainly valid for when type inference fails or for generally forcing it to do your will in weird cases. Relatedly, I don't totally hate forcing top-level function signatures to be explicit, which would also solve the most-general-interface thing in most cases.
AFAICT your last example is just ill-typed and should not compile. This is another of the places where Java's type system just fails entirely. What is the caller who gets that thing supposed to do with it, except reflect on the type of the contents like we're writing Python?