
I am slowly trying to understand dependent types, but the explanation is a bit confusing to me. I understand the mathematical terminology of a function that may return a type, but... since function types take a value and return a value, they are by definition in another universe from, say, morphisms that take a type and return a type.

In the same way, I see a value as an ur-element and types as sets of values. So even if there is syntactic sugar around the value <-> type equivalence, I'd naively think that we could instead define some type morphism, and that might be more accurate. The value parameter would merely be declared through a type parameter constrained to be a singleton. In the same way, an ur-element is not a set but a member of a set.

Then the question is representation, but that could be left as an optimization. Perhaps that is already what is done.

Example:

type Nine int = {9}

And then the rest is generic functions, parameterizable by 9, or rather by Nine.

So nothing too different from a refinement of int.

Basically, 'value' would be a special constraint on a type parameter in normal parametric polymorphism implementations. There would probably be derived constraint information such as size etc...

But I guess the same issue of "which refinement types can be defined, while keeping everything decidable" remains.

Also how to handle runtime values? That will require type assertions, just like union types? Or is it only a compile time concept and there are no runtime instantiations? Only some kind of const generics?

A typeof function could be an example of dependent type though? Even at runtime?

Just wondering...



In the style of the linked post, you'd probably define a generic type (well, one of two generic types):

type ExactlyStatic : (0 t: Type) -> (0 v: t) -> Type

type ExactlyRuntime : (0 t: Type) -> (v: t) -> Type

Then you could have the type (ExactlyStatic Int 9) or the type (ExactlyRuntime Int 9).

The difference is that (ExactlyStatic Int 9) doesn't expose the value 9 to the runtime, so it would be fully erased, while (ExactlyRuntime Int 9) does.

This means that the runtime representation of the first would be (), and the second would be Int.
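
For a rough comparison, here is what the runtime-carrying variant might look like in Lean 4 (a sketch with hypothetical names; Lean has no Idris-2-style `0` multiplicity annotations, so the fully erased variant has no direct counterpart, but the proof component of a subtype is erased by the compiler):

    -- The proof field lives in Prop, so it is erased: the runtime
    -- representation of `Exactly Int 9` is just an Int.
    abbrev Exactly (t : Type) (v : t) : Type := { x : t // x = v }

    -- Its only inhabitant:
    def theNine : Exactly Int 9 := ⟨9, rfl⟩

    #eval theNine.val  -- 9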

> Also how to handle runtime values? That will require type assertions, just like union types?

The compiler doesn't insert any runtime checks that you didn't write in your code yourself. The difference is that now, when you write e.g. `if condition(x) then ABC else DEF`, inside the two branch expressions you can obtain a proof that condition(x) is true/false, and propagate it.
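
In Lean 4, for instance, the dependent if-then-else binds that proof for you (a minimal sketch; `clamp` is a made-up example):

    -- `if h : c then _ else _` gives you h : c in the then branch and
    -- h : ¬c in the else branch; no runtime check is added beyond the
    -- test you wrote yourself.
    def clamp (x : Int) : { y : Int // y ≥ 0 } :=
      if h : x ≥ 0 then
        ⟨x, h⟩          -- h : x ≥ 0 is available and can be propagated
      else
        ⟨0, by decide⟩  -- here h : ¬(x ≥ 0) is available instead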

Value representations will typically be algebraic-data-type flavored (so, often a tagged union) but you can use erasure to get more efficient representations if needed.


In type theory, all singleton types are isomorphic and have no useful distinguishing characteristics (indeed, this is true of all types of the same cardinality – and even then, comparing cardinalities is undecidable in general and thus irrelevant at runtime). So your Nine type doesn’t really make sense, because you may as well just write Unit. In general, no introspection into the “internal structure” of a type is offered; even though parametricity does not hold in general (unless you postulate anticlassical axioms), all your functions that can run at runtime are required to be parametric.


Being isomorphic is not the same as being identical or substitutable for one another. Type theory generally distinguishes between isomorphism and definitional equality, and only the latter allows literal substitution. So a Nine type with a single constructor is indeed isomorphic to Unit, but it's not the same type; it carries different syntactic and semantic meaning, and the type system preserves that.
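
In Lean 4 terms, the distinction looks like this (a sketch):

    -- Nine and Unit are isomorphic...
    inductive Nine : Type where
      | mk : Nine

    def nineToUnit : Nine → Unit := fun _ => ()
    def unitToNine : Unit → Nine := fun _ => Nine.mk

    example : ∀ x, unitToNine (nineToUnit x) = x := by intro x; cases x; rfl
    example : ∀ u, nineToUnit (unitToNine u) = u := by intro u; cases u; rfl

    -- ...but they are not definitionally equal: the type checker
    -- rejects `example : Nine = Unit := rfl`.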

Another false claim is that type theory does not distinguish types of the same cardinality. Type theory is usually intensional, not extensional, so two types with the same number of inhabitants can have wildly different structures, and this structure can be used in pattern matching and type inference. Cardinality is a set-theoretic notion, but most type theories are constructive and syntactic, not purely set-theoretic.

Also, parametricity is a property of polymorphic functions, not of all functions in general. It's true that polymorphic code can't depend on the specific structure of its type argument, but most type theories don't enforce full parametricity at runtime. Many practical type systems (like Haskell with type classes) break it with ad-hoc polymorphism or runtime types.


This comment contains a lot of false information. I’m first going to point out that there is a model of Lean’s type theory called the cardinality model, in which all types of equal cardinality are modelled as the same set. This is why I say the types have no useful distinguishing characteristics: it is consistent to add the axiom `Nine = Unit` to the type theory. For the same reason, it is consistent to add `ℕ = ℤ` as an axiom.
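
Sketched in Lean 4 (with a hypothetical single-constructor Nine):

    inductive Nine : Type where
      | mk : Nine

    -- Not provable, but consistent to assume: the cardinality model
    -- validates it, since both types have exactly one element.
    axiom nineEqUnit : Nine = Unit

    -- The axiom then lets us transport values across the equality:
    example (x : Nine) : Unit := cast nineEqUnit x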

> So a Nine type with a single constructor is indeed isomorphic to Unit, but it's not the same type; it carries different syntactic and semantic meaning, and the type system preserves that.

It carries different syntax but the semantics are the exact same.

> Type theory is usually intensional, not extensional, so two types with the same number of inhabitants can have wildly different structures

It is true that type theory is usually intensional. It is also true that two types equal in cardinality can be constructed in multiple different ways, but this has nothing to do with intensionality versus extensionality – I wouldn’t even know how to explain why, because it is just a category error – and furthermore, just because they are constructed differently does not mean the types are actually different (because of the cardinality model).

> Cardinality is a set-theoretic notion, but most type theories are constructive and syntactic, not purely set-theoretic.

I don’t know what you mean by this. Set theory can be constructive just as well as type theory can, and cardinality is a fully constructive notion. Set theory doesn’t have syntax per se but that’s just because the syntax is part of logic, which set theory is built on.

> most type theories don't enforce full parametricity at runtime

What is “most”? As far as I know Lean does, Coq does, and Agda does. So what else is there? Haskell isn’t a dependent type theory, so it’s irrelevant here.

---

Genuine question: Where are you sourcing your information about type theory from? Is it coming from an LLM or similar? Because I have not seen this much confusion and word salad condensed into a single comment before.


I will let Maxatar respond if he wants to, but I will note that his response makes much more sense to me than yours, as someone who uses traditional programming languages and used to do a little math a couple of decades ago.

With yours, it seems that we could even equate string to int.

How can a subtype of the integer type, defined extensionally, be equal to Unit? That really doesn't make any sense to me.

> it is consistent to add `ℕ = ℤ` as an axiom

Do you have a link to this? I am unaware of this. Not saying you're wrong, but I would like to explore this. Seems very strange to me.

As he explained, an isomorphism does not mean equality, as far as I know; cf. Cantor. How would anything typecheck otherwise? In denotational semantics, this is not how it works. You could look into semantic subtyping with set-theoretic types, for instance.

type Nine int = {9} defines a subtype of int called Nine that refers to the value 9. All variables declared of that type are initialized as containing int(9). They are equal to Nine. If you erased everything and replaced it with Unit {}, it would not work. This is a whole other type/value.

How would one be able to implement runtime reflection then?

I do understand that his response to you was a bit abrupt. Not sure he was factually wrong about the technical side, though. Your knee-jerk response makes sense, even if it is too bad that it risks making the conversation less interesting.

Usually types are defined intensionally, e.g. by name, not by listing a set of members (an extensional encoding) as in their set-theoretic semantics. So maybe you have not encountered such treatment in the languages you are used to?

edit: the only way I can understand your answer is if you are only considering the universe of types as standalone from the universe of values. In that universe, we only deal with types, and types structured as composites, in the way you are familiar with, perhaps? Maybe then it is just that this universe, as formalized, is insufficiently constrained/underspecified/over-abstracted?

It shouldn't be too difficult to define specific extensional types on top, for which singleton types would not have their extensional definitions erased.


> Many practical type systems (like Haskell with type classes) break it with ad-hoc polymorphism or runtime types.

Haskell does not break parametricity. Any presence of ad-hoc polymorphism (via type classes) or runtime types (via something like Typeable, itself a type class) is reflected in the type signature and thus completely preserves parametricity.
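
The same discipline shows up in dependently typed languages. For comparison, a Lean 4 sketch of the analogous situation:

    -- With no constraints, parametricity pins this down: the only
    -- total function of this type is the identity.
    def onlyId {α : Type} (x : α) : α := x

    -- Ad-hoc behaviour must be declared in the signature, here as an
    -- instance argument (Lean's counterpart of a type-class constraint):
    def describe {α : Type} [ToString α] (x : α) : String := toString x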


I think what they meant is that it is not purely parametric polymorphism. Not that parametricity is broken.


Hmm, maybe

> most type theories don't enforce full parametricity at runtime

means "sometimes types can appear at run time"? If so it's true, but it's not what I understand by "parametricity".


Not sure either: parametric polymorphism is compile time. Adding runtime behaviors is a sort of escape hatch.

Which is fair, although necessary. It's not so much a breakage as it is a necessity, since non-parametric code may rely on this too.

Or maybe it is about constrained parametricity. In any case this doesn't seem a big issue.


How does it become Unit if it is an integer of value 9? Why would cardinalities be undecidable if they are encoded discretely in the type?

For instance, type Nine int = {9} would not be represented as 9. It would probably be a fat pointer. It is not just an int; it would not even have the same operations (9 + 9 is 18, which is an int but is not of type Nine, but that's fine: a fat pointer does not need to share the same set of operations as the value it wraps).

It could be seen as a refinement of int? I am not sure that it can truly be isomorphic. My suspicion is that it would only be somewhat isomorphic at compile time, for type checking, and only if there is a mechanism for auto-unwrapping the value?


There's only one possible value of type Nine; there's only one possible value of type Unit. They're isomorphic: there's a pair of functions to convert from Nine to Unit and from Unit to Nine whose compositions are identities. Both functions are just constants that discard their argument un-inspected. "nineToUnit _ = unit" and "unitToNine _ = {9}".

You've made up your language and syntax for "type Nine int = {9}" so the rules of how it works are up to you. You're sort of talking about it like it's a refinement type, which is a type plus a predicate over values of the type. Refinement types aren't quite dependent types: they're sort of like a dependent pair where the second term is of kind Proposition, not Type; your type in Liquid Haskell would look something like 'type Nine = Int<\n -> n == 9>'.
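
In a language like Lean 4, that dependent-pair-with-a-proposition reading is literally the subtype former (a sketch):

    -- The set of Ints equal to 9, as a pair of a value and a Prop.
    abbrev Nine : Type := { n : Int // n = 9 }

    def nine : Nine := ⟨9, rfl⟩

    -- The proof component is erased, so the runtime representation
    -- is just the underlying Int.
    #eval nine.val  -- 9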

Your type Nine carries no information, so the most reasonable runtime representation is no representation at all. Any use of the Nine boils down to pattern matching on it, and a pattern match on a Nine only has one possible branch, so you can ignore the Nine term altogether.


Why doesn't it seem exactly true?

It is an integer; it is in the definition, and any value should be equal to nine. By construction, Nine could have been given the same representation as int at first, except that this is not enough to express the refinement/proposition.

One could represent it as a fat pointer, with space allocated for the set of propositions/predicates against which to check the value.

That allows to check for equality for instance.

That information would not be discarded.

This is basically a subtype of int.

As such, it is both a dependent type and a refinement type, even though it is true that not all refinement types are dependent types, because of cardinality.

I think Maxatar's response in the same thread puts it in words that are possibly closer to the art.


When does the predicate get tested?

Also, it's not a dependent type. The type does not depend on the value. The value depends on the type.


The predicate gets tested every time we do type checking? It is part of the type identity. And it is a dependent type, just like an array type is a dependent type, the actual array type depending on its length argument.
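
For reference, the length-indexed vector is the textbook case; in Lean 4 it looks roughly like this:

    -- The type `Vec α n` depends on the length value n.
    inductive Vec (α : Type) : Nat → Type where
      | nil  : Vec α 0
      | cons {n : Nat} : α → Vec α n → Vec α (n + 1)

    -- head is total: `Vec α (n + 1)` cannot be nil, so no runtime
    -- check (and no Option) is needed.
    def Vec.head {α : Type} {n : Nat} : Vec α (n + 1) → α
      | .cons x _ => x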

edit: I think I am starting to understand. In the implementations that currently exist, singleton types may be abstracted. My point is exactly to unabstract them so that the value is part of their identity.

And only then can we deal with only types, i.e. everything from the same universe.


> The predicate gets tested every time we do type checking? It is part of the type identity.

When does type checking happen?

I think it happens at compile time, which means the predicate is not used for anything at all at run time.

> edit: I think I am starting to understand. In the implementations that currently exist, singleton types may be abstracted. My point is exactly to unabstract them so that the value is part of their identity.

I am not sure what you mean by "to unabstract them so that the value is part of their identity", sorry. Could you please explain it for me?

> And only then can we deal with only types, i.e. everything from the same universe.

If you mean avoiding the hierarchy of universes that many dependently typed languages have, the reason they have those is that treating all types as being in the same universe leads to a paradox ("the type of all types" contains itself and that gets weird - not impossible, just weird).


> the predicate is not used for anything at all at run time.

It is used for its value when declaring a new variable of a given type at runtime too. It has to be treated as a special predicate. The allocator needs to be aware of this. Runtime introspection needs to be aware of this. Parametric type instantiation also needs to know about this, since it is used to create dependent types.

The point is that in the universe of types that seems to be built in dependent types, singleton types are just types decorrelated from their set of values. So they become indistinguishable from each other, besides their name. Or so the explanation seems to be. It is abstracted from their value. The proposal was to keep the set definition attached. That is what I call unabstracting them.

The point is exactly to avoid mixing up universes, which can lead to paradoxes. Instead of dealing with types as some sort of values, with functions over types mixed up with functions over values (which are also types), and then functions of types and values to make some sort of dependent types, we keep the algebra of types about types. We just bridge values into singleton types. Also, it should allow an implementation that relies mostly on normal constrained parametricity and those singleton types. The point is that mixing values and types (as values) would lead exactly to this type-of-all-types issue.

But again, I am not familiar enough with the dependent type implementations to know exactly what treatment they have of the issue.


Hi aatd86! We had a different thread about existence a couple days ago, and I'm just curious -- is English your second language? What's your first language? I'm guessing French, or something from Central Europe.

Thanks for humoring me!


Functions that return types are indeed at a higher level than those that don't.

Values can be seen as singleton types. The key difference is the universe they live in. In the universe of types, the level below is perceived as values. Similarly, in the universe of kinds, types appear to be values.

> Basically, 'value' would be a special constraint on a type parameter in normal parametric polymorphism implementations

Yes this is a level constraint.

> But I guess the same issue of "which refinement types can be defined, while keeping everything decidable" remains.

If you're dealing with fully computable types, nothing is decidable.

> Also how to handle runtime values? That will require type assertions, just like union types? Or is it only a compile time concept and there are no runtime instantiations? Only some kind of const generics?

A compiler for a dependently typed language essentially produces programs that have the compiler itself, together with its input, embedded in them. There cannot be a distinction between runtime and compile time, because in general type checking requires you to be able to run a program. Compilation essentially becomes deciding which parts you want to evaluate now and which parts you want to defer until later.
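
You can see this in Lean 4, where the checker evaluates terms while checking types (a small sketch):

    -- Checking this definition requires reducing `2 + 2` to 4;
    -- the compile-time checker runs the same evaluator as the runtime.
    def three : Fin (2 + 2) := ⟨3, by decide⟩

    -- `Fin (2 + 2)` and `Fin 4` are the same type, definitionally:
    example : Fin (2 + 2) = Fin 4 := rfl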

> A typeof function could be an example of dependent type though? Even at runtime?

Typeof is just const.

Typeof : (T : Type) -> (x : T) -> Type
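
In Lean 4, that is a one-liner (the name typeOf is hypothetical):

    -- Ignores its value argument and returns the type argument:
    -- const, one universe up.
    def typeOf (T : Type) (_ : T) : Type := T

    example : typeOf Int 9 = Int := rfl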

Final note: I've written some compilers for toy dependently typed languages. Dependent typing makes both the language and the compiler easier, not harder, by far. This is because Haskell and C++ and other languages with type systems and metaprogramming or generics actually have several languages: the value language that we are familiar with, but also the type language.

In Haskell, this is the class/instance language, which is another logic programming language atop the lambda calculus language. This means that to write a Haskell compiler you have to write a compiler for Haskell and for the logic programming language (which is Turing complete, btw) that is the type language.

Similarly, in C++, you have to implement C++ AND the template system, which is a functional programming language with incredibly confusing semantics.

On the other hand, for a dependently typed language, you just write one language... The types are talked about in the same way as values. The only difficulty is type inference. However, an explicitly typed dependent language is actually the easiest typed compiler to write, because it's literally just checking for term equality, which is very easy! And since type inference for a dependent language is never going to be perfect anyway, you're a lot freer to make your own heuristic decisions and forego HM-like perfection.



