> we must window our understandings

I'm not arguing with that. I'm only saying that we shouldn't assume "the types are enough", so long as we have more "cognitive window" available. Here's why:

  add     :: Integer -> Integer -> Integer
  add x y =  x + y

  add     :: Integer -> Integer -> Integer
  add x y =  x - y
The type system doesn't have any idea which of these is correct. So if the programmer is "asleep at the wheel", it's still entirely possible for their logic to come out wrong.

All I'm saying is we shouldn't needlessly ignore the actual logic in favor of "letting the compiler write it for us". That's what the OP is expressly advocating for.



I am really torn on the subject. I write physics simulations for research use, so I'll use an FDTD code that simulates Maxwell's equations as my example.

On one hand, there are a bunch of minus signs in Maxwell's equations, and nothing except historical nomenclature and experiments can tell you where they go. Nothing in the type system can know whether dB/dt is curl(E) or -curl(E). (Note that dE/dt has the opposite sign.) And if you implement, say, a finite difference stencil, you have a bunch of numerical coefficients like (1/24, -27/24, 27/24, -1/24) that are not straightforward to derive. If you go to NSFD, those coefficients actually come out of a numerical optimization run by another code. The compiler will never be able to type check that.
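For concreteness, such a fourth-order staggered-grid stencil might look like the sketch below (my own rough illustration, names made up); note that the type Double -> [Double] -> [Double] is equally satisfied by right and wrong coefficients or signs:

    import Data.List (zipWith4)

    -- Hypothetical sketch: fourth-order staggered-grid first derivative
    -- using the coefficients (1/24, -27/24, 27/24, -1/24) mentioned above.
    -- Nothing about the type changes if a coefficient or a sign is wrong.
    deriv4 :: Double -> [Double] -> [Double]
    deriv4 h fs = zipWith4 stencil fs (drop 1 fs) (drop 2 fs) (drop 3 fs)
      where
        stencil a b c d = (1/24 * a - 27/24 * b + 27/24 * c - 1/24 * d) / h

    main :: IO ()
    main = mapM_ print (deriv4 0.1 [sin (0.1 * k) | k <- [0 .. 20]])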

On the other hand, it can be quite useful to have the compiler type check variables for correct SI units, so you don't accidentally add curl(E) to B without multiplying by the time step dt. But that is already quite possible with the C++ type system that purists deride as bad and weak. Maybe it could even check that anything written to the new time step has a change proportional to dt added to it, and prevent me from accessing values that are not properly time centered. But then again, I have not seen anybody write code like that in Haskell that did anything useful.
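The SI-unit part, at least, is cheap even without a units library (a real project would probably reach for something like the `dimensional` package). A minimal sketch with made-up names: newtypes are enough to force the dt factor, while still saying nothing about the minus sign.

    -- Hypothetical sketch. Tesla = V*s/m^2, so a curl(E) sample (V/m^2)
    -- only becomes addable to a B sample after multiplication by a time
    -- step in seconds.
    newtype Tesla     = Tesla Double deriving Show
    newtype VoltPerM2 = VoltPerM2 Double   -- one sample of curl(E)
    newtype Seconds   = Seconds Double     -- the time step dt

    scaleByDt :: Seconds -> VoltPerM2 -> Tesla
    scaleByDt (Seconds dt) (VoltPerM2 c) = Tesla (dt * c)

    -- Faraday's law for one sample: dB/dt = -curl(E). The compiler checks
    -- the dt factor; the minus sign it has to take on faith.
    stepB :: Seconds -> Tesla -> VoltPerM2 -> Tesla
    stepB dt (Tesla b) curlE =
      let Tesla db = scaleByDt dt curlE in Tesla (b - db)

    main :: IO ()
    main = print (stepB (Seconds 1e-12) (Tesla 0) (VoltPerM2 3.0))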

Having guardrails is nice, but you cannot write the entire program in types. And if you try, all you get is the same complexity in an unreadable syntax.


I want to say that the C++ type system is not really weak — it can enforce basically anything, since templates are Turing complete. But most of those possible types are horrifically unergonomic.


That's not what the OP is arguing for. Here is the relevant portion of the article that might clear up your misconception:

> I’m not going to say that blindly filling in type holes always works, but I’d say maybe 95% of the time? It’s truly amazing just how far you can get by writing down the right type and making sure you use every variable.

> The reason why this works is known as theorems for free, which roughly states that we can infer lots of facts about a type signature (assuming it’s correct). One of those facts we can infer is often the only possible implementation. It’s cool as fuck, but you don’t need to understand the paper to use this idea in practice.

> One question you might have is “what the heck does it mean for a type to be correct?” Good question! It means your type should be as polymorphic as possible.
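To make the quoted idea concrete (my own example, not the article's):

    -- Parametricity pins this down: there is no way to produce a `b`
    -- except by applying the function, so this is the only total
    -- implementation of this signature.
    apply :: (a -> b) -> a -> b
    apply f x = f x

    -- By contrast, the monomorphic signature from upthread also admits
    -- subtraction, const 42, and anything else the type cannot tell apart.
    addInts :: Integer -> Integer -> Integer
    addInts x y = x + y

    main :: IO ()
    main = print (apply (+ 1) (41 :: Integer), addInts 1 2)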


It's because `Integer` is an insufficiently specific type to pin down the value-level expression. If you really wanted to, you could give `add` a type that proves it is actually addition, in Haskell and Haskell-likes: https://github.com/idris-lang/Idris-dev/blob/master/libs/pre...
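For a flavor of the same idea without leaving GHC (a sketch of mine, not the linked Idris code), you can give `add` a type precise enough that the subtracting version from upthread simply will not compile:

    {-# LANGUAGE DataKinds, GADTs, TypeFamilies, KindSignatures #-}

    data Nat = Z | S Nat

    -- Type-level addition serves as the specification.
    type family Plus (n :: Nat) (m :: Nat) :: Nat where
      Plus 'Z     m = m
      Plus ('S n) m = 'S (Plus n m)

    -- Singleton naturals: the runtime value mirrors the type-level number.
    data SNat (n :: Nat) where
      SZ :: SNat 'Z
      SS :: SNat n -> SNat ('S n)

    -- The result type is Plus n m, so only genuine addition type checks
    -- here; the subtraction "bug" cannot be expressed at all.
    add :: SNat n -> SNat m -> SNat (Plus n m)
    add SZ     m = m
    add (SS n) m = SS (add n m)

    toInteger' :: SNat n -> Integer
    toInteger' SZ     = 0
    toInteger' (SS n) = 1 + toInteger' n

    main :: IO ()
    main = print (toInteger' (add (SS (SS SZ)) (SS SZ)))  -- prints 3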


The author is advocating this method for functions whose types are sufficiently polymorphic, which your add function is not.


Indeed, this is a perfect example of why it only works when your types are sufficiently polymorphic, and of how Haskell's type system can't always capture all of the logic required.


For my "standard" language response: In general, I don't actually like using mathematics for this sort of discussion, because on the one hand it all seems so reasonable, and yet, on the other hand, almost nothing we do involves working in spaces as complicated as numbers. You see something similar in "operator overloading"... makes great sense for numbers, but everywhere else it turns into a pain, because we don't really work in numerical spaces.

The vast majority of the time, we're really working with types like

    User -> Permissions -> PermissionType -> Maybe BillingHistory
and a type-based refactoring of "Permissions -> PermissionType" into "PermissionsWithType" isn't going to go wrong, even in looser languages. (Those aren't very Haskell-y types, but I'll keep the notation here for now.) As long as you can assume your code isn't entirely insane, and the permissions code isn't going to spontaneously guess permissions into existence when you go to check something with them, you're going to be generally OK with this sort of type-directed refactoring.
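Sketched out (every name here is a stand-in, not real code of mine):

    data User           = User
    data Permissions    = Permissions
    data PermissionType = Billing | Admin
    data BillingHistory = BillingHistory deriving Show

    -- Before the refactoring: permissions and the kind of check travel
    -- separately through the signature.
    billingHistory :: User -> Permissions -> PermissionType -> Maybe BillingHistory
    billingHistory _ _ Billing = Just BillingHistory
    billingHistory _ _ _       = Nothing

    -- After: they travel together, and the compiler points at every call
    -- site that still passes them separately.
    data PermissionsWithType = PermissionsWithType Permissions PermissionType

    billingHistory' :: User -> PermissionsWithType -> Maybe BillingHistory
    billingHistory' u (PermissionsWithType p t) = billingHistory u p t

    main :: IO ()
    main = print (billingHistory' User (PermissionsWithType Permissions Billing))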

To the extent that you can't actually depend on your code to not do things like magic permissions into existence... well... this is precisely why that's always a bad idea. Don't write code that does that.

For my Haskell response: In Haskell, when you know how to read the type system, you don't have to use a heuristic to guess whether you should slow down... the type itself is telling you that you need to be more careful. As a reasonably skilled Haskell programmer using type-directed reasoning, you can immediately read off from "Int -> Int -> Int" that you have a more complicated function than "a -> a -> a". The latter is so "simple" there are only two reasonable implementations, whereas that's only the beginning of what your Int function may do. However, when you have "Applicative f => f (a -> b) -> f a -> f b", you actually can know what it is not doing. The polymorphism tells you you don't have to sit there and study it. It can't magic an "a" into existence, or suddenly decide to use "division" on the resulting "b", because it can't. It doesn't know how. You don't need to worry about it doing those things anymore, because it can't.
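Concretely, a quick sketch of what I mean:

    -- "a -> a -> a" has exactly two total implementations: return the
    -- first argument or return the second. Nothing else is expressible.
    pickFirst, pickSecond :: a -> a -> a
    pickFirst  x _ = x
    pickSecond _ y = y

    -- The Applicative shape can only combine the two effects and apply
    -- the function; it cannot conjure an `a` or do arithmetic on the `b`.
    applyF :: Applicative f => f (a -> b) -> f a -> f b
    applyF ff fa = ff <*> fa

    main :: IO ()
    main = do
      print (pickFirst (1 :: Int) 2)
      print (applyF (Just (+ 1)) (Just (41 :: Int)))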

For my harmonization of the two: In principle, imperative code can almost always magic values into existence, use globals, etc. One of the reasons I can stand writing in looser languages like Go is that I just don't do those things. In general, when I write code, if you've got a "Permission" and a "PermissionCheckable", I do my best to keep it so there's only one sensible way to combine the two. So, when I'm refactoring my code, even in looser strictly-typed languages, I can still pretty much get away with the type-directed approach. (Backstopped by the unit tests I have anyhow.) It is certainly true that you can write, or be forced to deal with, code that is a conceptual mishmash, where this is more dangerous, but I'd suggest that as you get experienced over the years you learn to stop doing that.



