> I’m not going to say that blindly filling in type holes always works, but I’d say maybe 95% of the time?
I must do a very different type of programming. In my world, I'd say this might work maybe 20% of the time, max.
[Edit: Others on this thread (lkitching and weevie, at least) say the difference is that the types must be "sufficiently polymorphic". My types almost never are. I suspect the reason is that the more polymorphic the types, the less you know about what can be done with each one, and therefore the fewer options there are for combining them. For example, if my function takes two ints, I can add, subtract, multiply, divide, take the remainder, and so on; the types alone can't guide me to the right choice. But if I'm passed two "a"s, I probably also need to be passed the function to combine them with (they may not have a "+"). Then it's pretty clear: I'm given two "a"s and a function that takes two "a"s and returns an "a", and there's really only one way to put those parts together.]
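To make that contrast concrete, here's a minimal Haskell sketch (the names combine and combineInts are mine, purely for illustration): the polymorphic signature has essentially one implementation the types will accept, while the monomorphic one has many.

    -- Fully polymorphic: knowing nothing about 'a', the only way to
    -- produce an 'a' is to apply the supplied function to the two
    -- arguments (or hand one of them back unchanged).
    combine :: a -> a -> (a -> a -> a) -> a
    combine x y f = f x y

    -- Concrete types: many distinct bodies type-check (x + y, x - y,
    -- x * y, x `mod` y, ...), so the types can't pick the right one.
    combineInts :: Int -> Int -> Int
    combineInts x y = x + y

    main :: IO ()
    main = print (combine (3 :: Int) 4 (+), combineInts 3 4)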