
"This type of code error is prevented by languages with strong type systems. In our replacement for this code in our new FL2 proxy, which is written in Rust, the error did not occur." It's starting to sound like a broken record at this point, languages are still seen as equal and as a result, interchangeable.


My longstanding prediction that Gatekeeper will ever so slowly tighten, so that people don't realise it, like a frog boiled in water, is continuing to come true.


People did realize when the actual Gatekeeper change happened a year ago [1]. But your prediction still holds because frogs do realize when they're boiled in water [2].

[1] https://arstechnica.com/gadgets/2024/08/macos-15-sequoia-mak..., https://www.macrumors.com/2024/08/06/macos-sequoia-gatekeepe..., https://daringfireball.net/linked/2024/08/07/mac-os-15-sequo.... Top HN comment on Sequoia's announcement mentions it: https://news.ycombinator.com/item?id=41559761

[2] https://en.wikipedia.org/wiki/Boiling_frog#Experiments_and_a...


The point is that by the time Gatekeeper closes tight enough that everything must run through Apple and it can't be disabled, most people won't notice and will be stuck with it.


Your assertion seems to imply that there will be a point of no return where users are no longer able to stop buying Apple hardware to run the software they want, and that therefore people should do so now.

If that's not what you're saying, then your point is effectively moot: if Apple's platform control does indeed get too egregious for some individuals, then those people will switch at that point, so there's no point in panic-switching now just in case.

In other words, users will switch when what Apple is offering does not meet what those users require. Some users will literally never care, because all the software they use is signed and gatekept and so on; some users have jumped ship already because they want to be able to change whatever they want whenever they want. If things continue to "slippery slope", then more people will hit their own tipping point, but asserting that it's going to happen all at once and apply to everyone is nonsense.


> more people will hit their own tipping point but asserting that it's going to happen all at once and apply to everyone is nonsense.

The point of boiling the frog is to make sure it happens slowly, such that the alternative options can no longer compete and so stop being options.

Computer manufacturers and hardware makers cannot be trusted to keep their platforms open, because it would be detrimental to their bottom line. So it must come from regulation: right to repair etc. are on the right path, but what must also be done is the prevention of platform lockdowns. The owner of the hardware must be able to override all locks from the manufacturer.


iPhone is already a dictator state.

We need an antitrust breakup of Apple. And Google.

These companies are rotten.


There is no reason to believe this is going to happen other than the hyper-cynical conspiracy theories.

It remains easy to disable Gatekeeper if you want. New MacBooks still allow you to install other OSes, even though that would be trivial to lock down with signed boot requirements.

So far, none of the frog in boiling water predictions have actually come true at all. It’s just people parroting the same conspiracy every time the word Gatekeeper comes up, just like we went through every time Secure Boot came up.


Fortunately, Linux laptops are getting better and better. I'm hopeful that by the time my M1 MacBook Air gets slow enough to annoy me (maybe a year or two from now?), I'll be able to smoothly transition to Linux. I've already done it on the desktop!


> by the time my M1 MacBook Air gets slow enough to annoy me (maybe a year or two from now?)

It should be good for at least 5 years from now, if not more.


Before macOS 26 I would have agreed with you. But after Tahoe my M1 MacBook Pro feels a lot slower.

Funny, there's even some regression in layer-backed NSView rendering where the app I'm working on is faster (in some aspects) in a macOS 15 VM than on bare metal under macOS 26.


Are you running any electron apps that have not yet been updated to use the most recent upstream electron?

https://furbo.org/2025/10/06/tahoe-electron-detector/

I've got a couple things that I use which aren't yet up-to-date, and are blocking my upgrade.


Probably not if you bought the 8GB version :D


My 8GB M1 Air is still running as well today as the day I bought it.


And it will run at the same speed. But I would guess a lot of apps will require more RAM at some point.

OP said it should last for "at least 5 years from now, if not more", which I doubt. Maybe for light web browsing.


Fingers crossed for mine as well!


Just did this. I am so much happier. As a lifelong Apple user and side-quest Linux user, the choice is a no-brainer nowadays. Desktop Linux is honestly great now. I love(d) Apple, but Tahoe was the straw that broke the camel's back for me.

i use arch btw


My family have bought Macs and been Apple fanboys since the "Pizzabox" 6100 PowerPC. My dad handed me down a DuoDock when I was in middle school. We bought a G4 Cube, and I had an iBook and a PowerBook throughout college and throughout the 2010s.

In 2017 I built my first desktop PC from the ground up and got it running Windows/Linux. I just removed Windows after the Windows 11 upgrade required a TPM, and I bought a brand new Framework laptop which I love.

This is to say that Apple used to represent a sort of freedom to escape what used to be Microsoft's walled garden. Now it's just another dead-end closed ecosystem that I'm happy to leave behind.


> This is to say that Apple used to represent a sort of freedom to escape what used to be Microsoft's walled garden. Now it's just another dead-end closed ecosystem

So you haven’t had a Mac since 2017, but you believe all of us using Macs are stuck in some walled garden?

These comments are so weird. Gatekeeper can be turned off easily if that’s what you want. Most of us leave it on because it’s not actually a problem in practice. The homebrew change doesn’t even impact non-cask formulas.


It is said you only realise you are in jail once you feel the chains. And this is something Apple has tried to walk the line on: being locked down, but in a fashion that causes the least pushback from users.

Personally I never felt Mac OS was that locked down, but it has been over a decade since I last used it.

The only time I felt it was when trying to delete 'Chess', only for it to be listed as a vital system application. I know this isn't true, but I would love it if Chess turned out to be a load-bearing application for the entire OS. Like the folks at Apple don't know why, but if you remove it, everything stops. At least MS managed to remove the load-bearing Space Cadet Pinball. Replaced it with a OneDrive popup that handles all memory management in the kernel ;)

Back to the original point: by comparison, on iOS I definitely did feel the chains. One could fear Mac OS will turn into that, but they haven't conditioned people yet.


I have to agree. Number of times it’s prevented me from running software I wanted to run: zero. Number of times it’s stopped me and said the equivalent of “are you really sure?”: a handful, maybe once a year on average.

And it’s not like I don’t use a gazillion third party apps and commands.


Same. I can see how it would look like a major problem if your only perspective was through clickbait headlines and angry comments from people who don’t use Macs anyway, though.

It reminds me of the distant cousin who lives out in the countryside and prides themselves on not living in the city because the news tells them it's a dangerous hellhole where everyone is getting mugged or shot on every street corner. When you immerse yourself in clickbait journalism, the other side, whatever that may be, starts to look much worse than reality.


Running VMs on Apple chips has been rather difficult for me. Other than that, yeah.


Apple does not support running other OSes on their hardware. This is bad in many senses, but it is especially bad since it weakens competition and reduces incentives for Apple to improve their own OS, meaning it is bad even for their users in the long run.

If you choose to buy hardware from Apple, you must consider that you're encouraging a behaviour that is bad for everyone, including yourself.


I'm not sure what you're talking about. Their bootloader explicitly supports other OSes. They make it easy to run Windows (even through a built-in app that helps you set it up). There are plenty of reasons to criticize Apple, but they literally don't do anything to prevent you from running another OS.


> Their bootloader explicitly supports other OSes

That’s true, but that’s probably only so that it wouldn’t become a talking point when Apple Silicon Macs were released, because Intel Macs weren’t locked.

In reality, the bootloader isn’t closed (yet), but the hardware is so undocumented that it’s easy to understand that Apple doesn’t want anything other than their OS on your Mac. The "alternative OS" situation is actually worse than it used to be with Intel Macs, and Apple pays a lot of attention to never talking about this "feature".

IMO, they will just quietly remove this possibility on new generations once everyone has forgotten that Boot Camp used to be a thing.


Eh, you may be right, but there's a big difference between "they are going to forbid other OSes by placing a software restriction where they explicitly permit things now" and "they already effectively forbid other OSes by not publishing developer documentation for proprietary hardware"--that's a tall order, and not a bar that many other hardware manufacturers meet either.

Like, could they lock down the bootloader? Sure. But that's effort they'd have to put in for minimal benefit at the moment. Opening up their hardware would be a lot more effort for questionable benefits (to Apple).


> they literally don't do anything to prevent you from running another OS.

Like not documenting their hardware? Like making Asahi Linux a multi-year reverse engineering project that may never achieve perfect compatibility?

> They make it easy to run Windows

On Apple Silicon without virtualisation? Sorry, didn't know that.


The point is that Apple could have easily locked down the bootloader and made it not possible at all to install something else. In designing the M1 hardware they explicitly went out of their way to make sure other operating systems could be installed and they’ve said as much. They took their smartphone SoCs and bootloader that never allowed alternate operating systems and added that feature in actively.

Technically Asahi Linux isn’t facing a much different situation than standard Linux distributions as they relate to x86 hardware. There are thousands of PC components that don’t provide any sort of Linux driver where contributors reverse engineer those drivers.

Sure, in the PC world a lot more vendors do voluntarily provide Linux drivers, and Apple will never do that for its hardware, and that specific point is a valid criticism.

As far as assisting in running Windows, my understanding is that the company that makes Parallels and Apple have some kind of relationship. Microsoft officially endorses Parallels.

You can complain about it being virtualization but it’s perfectly fine for desktop apps or even some more intensive apps. And it’s not really a very valid complaint considering that Microsoft doesn’t distribute a general purpose ARM distribution of Windows.


> Technically Asahi Linux isn’t facing a much different situation than standard Linux distributions as they relate to x86 hardware.

Very very different.

> There are thousands of PC components that don’t provide any sort of Linux driver where contributors reverse engineer those drivers.

Increasingly rare. Maybe that only happens these days on extremely specialized hardware.


It’s only rare these days because Linux spent decades clawing its way into data centers and workstations.

You can find a somewhat similar situation on Linux, with other non-Apple ARM hardware.


> Like not documenting their hardware?

They aren't actively hindering that reverse engineering effort. They aren't _helping_ either, but I didn't claim that they were helping. For as long as I can remember, Apple's stance with Mac computers has been "We sell the computers to you in the way we think is best. If you want to tinker, that's on you." and I don't think that has materially changed.


Apple Silicon cannot boot Windows ARM and Apple is dropping boot camp support alongside x86 support in the near future.


> Apple Silicon cannot boot Windows ARM

That's totally up to Microsoft… they could have done a licensing deal with Apple years ago to enable Windows ARM to run natively on Apple Silicon hardware.


Why does this need a licensing deal? Windows didn't need a licensing deal to run on commodity PC hardware back in the day.


Because computers don't boot the way they used to in the commodity BIOS era. The boot loader has to cryptographically verify that it's a valid operating system it's attempting to boot.


Well, Apple could follow industry standards, too. The argument was Apple approves alternate operating systems as evidenced by boot camp. That's demonstrably not true anymore.


This. It’s technically possible (the same way Asahi uses), but Microsoft has to bring the support in Windows.


> Apple does not support running other OS's on their hardware.

The bootloader was intentionally left open to other OSes. You should look into Asahi Linux.


Neither does any other hardware vendor; even the likes of Dell, Lenovo and Asus clearly state on their online shops that their laptops work best with Windows, even when something like Ubuntu or Red Hat is an option.

Also they hardly ship any updates.


Asahi Linux[1] is unbelievably great on Apple Silicon. It's honestly the best Linux install experience I've ever had.

1. https://asahilinux.org/


Yes, but only on M1 and maybe M2 devices. Doesn't work at all on M4.

Stability is an issue (as I tested it with M1 Pro throughout the years).

Not all of the hardware features are supported. For example no external monitors through the usb-c port.

Also the project seems somewhat dead, having some core developers leave the project.

I had high hopes for Asahi but currently it doesn't seem like it will ever be fully production ready for currently relevant hardware.


Unfortunately, while Asahi Linux runs fine on M1 and M2 with some missing capabilities, it doesn't run at all on M3, M4 or M5.

The M1 and M2 are still great laptops, so it's still a good experience if you're looking for a second-hand Linux laptop with Apple-quality hardware.


> Gatekeeper will ever so slowly tighten so that people don't realise like a frog boiled in water is continuing to be true

Gatekeeper can be disabled. Given Cupertino’s pivot to services and the Mac’s limited install base relative to iPhones (and high penetration among developers) I’m doubtful they’d remove that option in the foreseeable future.


It really bothers me that Apple removed any convenient shortcut to bypass Gatekeeper like the old Control-click [1] hotkey. Apple's relentless ratcheting of the difficulty/annoyance of Gatekeeper has just about pushed me over the edge to completely disable it, despite the risk.

The ridiculous song and dance of "File is dangerous, delete it?"->No->Settings->Security->Open Anyway->"File is dangerous, delete it?"->No is getting ridiculously old after literally doing it a hundred times at this point. And soon enough Apple will inevitably come up with some additional hurdle like, idk, closing Settings three times in a row while reading a fingerprint during an odd numbered minute.

So in the name of "increased security" they've needlessly turned it into a binary thing where it's either completely unprotected or I accept that my own computer, which I paid for, will deliberately waste my time constantly. It makes Windows 11 seem elegant in comparison, where all I need to do is run Win11Debloat once on install and it gets out of my way.

[1] https://developer.apple.com/news/?id=saqachfa


Open Automator and make a droplet or service that runs `xattr -d com.apple.quarantine` on whatever file you give it. There’s a recursive option for xattr that I can’t remember but I add that one on too; I’ve unzipped stuff that had the flag and somehow ended up with hundreds of files I couldn’t open without GK prompts.


  xattr -cr <file or dir>
Clears all attributes recursively.


Thanks! I'll give that a try.


> in the name of "increased security" they've needlessly turned it into a binary thing where it's completely unprotected

Why isn't a binary condition valid? Isn't that the ethos inherent to a literal walled garden?

If you're inside, trust us. If you're outside, you don't, but don't expect us to bail you out.


I didn’t say it was invalid, just that it was needless. When I bought the laptop Gatekeeper was a tolerable nuisance and I was fine with the tradeoff given the security benefits.

The removal of the hotkey (which also required changing a setting before it worked at all) didn’t actually make it harder for a regular user to access, just 5x as aggravating every time it's necessary.

If they made developers go through some long and tedious process to re-enable it I would grumble but understand, but the only solution to get back to the 2024 status quo being entirely disabling a critical security feature certainly doesn't benefit me in any way.


> The ridiculous song and dance of "File is dangerous, delete it?"->No->Settings->Security->Open Anyway->"File is dangerous, delete it?"->No is getting ridiculously old after literally doing it a hundred times at this point. And soon enough Apple will inevitably come up with some additional hurdle like, idk, closing Settings three times in a row while reading a fingerprint during an odd numbered minute.

> So in the name of "increased security" they've needlessly turned it into a binary thing where it's either completely unprotected or I accept that my own computer, which I paid for, will deliberately waste my time constantly.

Remember when Apple made fun of Microsoft for doing exactly this? https://www.youtube.com/watch?v=8CwoluNRSSc


Gatekeeper isn’t changing. Homebrew’s policies are changing.

It also only applies to casks. If you don’t use homebrew casks, nothing is changing for you.

You can also disable Gatekeeper entirely. It’s very easy.

I don’t see what you think you’re predicting, unless you’re trying to imply that Gatekeeper is a conspiratorial plot to turn your Mac into an iPhone. I predict we’re going to be seeing those conspiracy theories for decades while it never comes true. Apple doesn’t want to destroy the market for their $5000 laptops so they can sell us a $1000 iPad as our only computing device, or send customers to competitors. This is like a replay of the sky-is-falling drama when Secure Boot was announced.


The writing was on the wall from the first implementation. But we all kept getting downvoted when pointing out the road ahead.


Shut up and buy the sock.


I hate that analogy—frogs jump out.


I thought the problem with the analogy was that they died instantly?


Here's me hoping this was something for Factorio: Space Age...


Hahaha my exact thought.


As a static typing advocate I do find it funny how all the popular dynamic languages have slowly become statically typed. After decades of people saying it's not at all necessary and being so critical of statically typed languages.

When I was working on a fairly large TypeScript project it became the norm for dependencies to have type definitions in a relatively short space of time.


People adapt to the circumstances. A lot of Python uses are no longer about fast iteration on the REPL. Instead of that we are shipping Python to execute in clusters on very long running jobs or inside servers. It's not only about having to start all over after hours, it's simply that concurrent and distributed execution environments are hostile to interactive programming. Now you can't afford to wait for an exception and launch the debugger in postmortem. Or even if you do it's not very useful.

And now my personal opinion: if we are going the static typing way, I would prefer to simply use Scala or similar instead of Python with types. Unfortunately, in the same way that high-performance languages like C attract premature optimizers, static types attract premature "abstracters" (C++ attracts both). I also think that dynamic languages have the largest libraries for technical merit reasons. Being more "fluid" makes them easier to mix. In the long term the ecosystem converges organically on certain interfaces between libraries.

And so here we are with the half-baked approach of gradual typing and # type: ignore everywhere.


Here we are because:

* Types are expensive and don't tend to pay off on spikey/experimental/MVP code, most of which gets thrown away.

* Types are incredibly valuable on hardened production code.

* Most good production code started out spikey, experimental or as an MVP and transitioned.

And so here we are with gradual typing because "throwing away all the code and rewriting it to be "perfect" in another language" has been known for years to be a shitty way to build products.

I'm mystified that more people here don't see that the value and cost of types is NOT binary ("they're good!", "they're bad!") but exists on a continuum that is contingent on the status of the app and sometimes even the individual feature.


> Types are expensive and don't tend to pay off on spikey/experimental/MVP code, most of which gets thrown away.

I find I’ve spent so much time writing with typed code that I now find it harder to write POC code in dynamic languages because I use types to help reason about how I want to architect something.

Eg “this function should calculate x and return it”; well, if you already know what you want the function to do, then you know what types you want. And if you don’t know what types you want, then you haven’t actually decided what that function should do ahead of building it.

Now you might say “the point of experimental code is to figure out what you want functions to do”. But even if you’re writing an MVP, you should know what each function should do by the time you’ve finished writing it. Because if you don’t know how to build a function, then how do you even know that the runtime will execute it correctly?
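To make that concrete, here's a minimal Python sketch (the monthly_payment name and the formula are mine, not from the comment above): writing the signature first is the same act as deciding what the function does.

    def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
        # Fixed-rate amortised payment: pinning down these types is pinning down the contract.
        monthly_rate = annual_rate / 12
        return principal * monthly_rate / (1 - (1 + monthly_rate) ** -months)

If you can't fill in those three parameter types and the return type, you haven't yet decided what the function is for.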


Python doesn’t have “no types,” in fact it is strict about types. You just don’t have to waste time reading and writing them early on.

While that's a boon during prototyping, a project may need more structural support as the design solidifies, the codebase grows, or a varied, growing team takes responsibility.

At some point those factors dominate, to the extent “may need” support approaches “must have.”


My point is that if you don’t know what types you need, then you can’t be trusted to write the function to begin with. So you don’t actually save that much time in the end; typing out type names simply isn’t the time-consuming part of prototyping.

But when it comes to refactoring, having type safety makes it very easy to use static analysis (typically the compiler) to check for type-related bugs during that refactor.

I’ve spent a fair amount of years in a great many different PL paradigms and I’ve honestly never found loosely typed languages any faster for prototyping.

That all said, I will say that a lot of this also comes down to what you’re used to. If you’re used to thinking about data structures then your mind will go straight there when prototyping. If you’re not used to strictly typed languages, then you’ll find it a distraction.


Right after hello world you need a list of arguments or a dictionary of numbers to names. Types.

Writing map = {} is a few times faster than map: dict[int, str] = {}. Now multiply by ten instances. Oh wait, I’m going to change that to a tuple of pairs instead.

It takes me about three times longer to write equivalent Rust than Python, and sometimes it’s worth it.


Rust is slower to prototype than Python because Rust is a low level language. Not because it’s strictly typed. So that’s not really a fair comparison. For example, assembly doesn’t have any types at all and yet is slower to prototype than Rust.

Let’s take Visual Basic 6, for example. That was very quick to prototype in even with “option explicit” (basically forcing type declarations) defined. Quicker, even, than Python.

Typescript isn’t any slower to prototype in than vanilla JavaScript (bar setting up the build pipeline — man does JavaScript ecosystem really suck at DevEx!).

Writing map = {} only saves you a few keystrokes. And unless you’re typing really slowly with one finger like an 80-year-old using a keyboard for the first time, you’ll find the real input bottleneck isn’t how quickly you can type your data structures into code, but how quickly your brain can turn a product spec / Jira ticket into a mental abstraction.

> Oh wait, I’m going to change that to a tuple of pairs instead

And that’s exactly when you want the static analysis of a strict type system to jump in and say “hang on mate, you’ve forgotten to change these references too” ;)

Having worked on various code bases across a variety of different languages, the refactors that always scare me the most isn’t the large code bases, it’s the ones in Python or JavaScript because I don’t have a robust type system providing me with compile-time safety.

There’s an old adage that goes something like this: “don’t put off to runtime what can be done in compile time.”

As computers have gotten exponentially faster, we seem to have forgotten this rule. And to our own detriment.


Rust has many high-level constructs available, as well as libraries ready and available if you stick to "python-like" things. Saving a "few keystrokes" is not what I described, it was specific: `: dict[int, str]` is hard to remember, write, and read, and there's lots of punctuation. Many defs are even harder to compose.

Cementing that in early on is a big pre-optimization (i.e. waste) when it has a large likelihood of being deleted. Refactors are not large at this point, and changes are trivial to fix.


I've found the transition point where types are useful starts even within a few hundred lines of code, and I've found types are not that restrictive, if at all, especially if the language started out typed. In the rare case I need to discard types, that option is usually available, and needing it is a code smell that you're doing something wrong.

Even within a recent toy 1h python interview question having types would've saved me some issues and caught an error that wasn't obvious. Probably would've saved 10m in the interview.


Yep, depends on your memory context capacity.

For me I often don't feel any pain-points when working before about 1kloc (when doing JS), however if a project is above 500loc it's often a tad painful to resume it months later when I've started to forget why I used certain data-structures that aren't directly visible (adding types at that point is usually the best choice since it gives a refresher of the code at the same time as doing a soundness check).


The transition to where type hints become valuable or even necessary isn't about how many lines of code you have; it is about how much you rely upon their correctness.

Type strictness also isn't binary. A program with lots of dicts that should be classes doesn't get much safer just because you wrote : dict[str, dict] everywhere.


> * Types are expensive and don't tend to pay off on spikey/experimental/MVP code, most of which gets thrown away.

This is what people say, but I don't think it's correct. What is correct is that say, ten to twenty years ago, all the statically typed languages had other unacceptable drawbacks and "types bad" became a shorthand for these issues.

I'm talking about C (nonstarter for obvious reasons), C++ (a huge mess, footguns, very difficult, presumably requires a cmake guy), Java (very restrictive, slow iteration and startups, etc.). Compared to those just using Python sounds decent.

Nowadays we have Go and Rust, both of which are pretty easy to iterate in (for different reasons).


> Nowadays we have Go and Rust, both of which are pretty easy to iterate in (for different reasons).

It's common for Rust to become very difficult to iterate in.

https://news.ycombinator.com/item?id=40172033


I think Java was the main one. C/C++ are (relatively) close to the metal, system-level languages with explicit memory management - and were tacitly accepted to be the "complicated" ones, with dynamic typing not really applicable at that level.

But Java was the high-level, GCed, application development language - and more importantly, it was the one dominating many university CS studies as an education language before python took that role. (Yeah, I'm grossly oversimplifying - sincere apologies to the functional crowd! :) )

The height of the "static typing sucks!" craze was more like a "The Java type system sucks!" craze...


For me it was more the “java can’t easily process strings” craze that made it impractical to use for scripts or small to medium projects.

Not to mention boilerplate BS.

Recently, Java has improved a lot on these fronts. Too bad it’s twenty-five years late.


> * Types are expensive and don't tend to pay off on spikey/experimental/MVP code, most of which gets thrown away.

Press "X" to doubt. Types help _a_ _lot_ by providing autocomplete, inspections, and helping with finding errors while you're typing.

This significantly improves the iteration speed, as you don't need to run the code to detect that you mistyped a variable somewhere.


PyCharm, pyflakes, et al. can do most of these without written types.

The more interesting questions, like “should I use itertools or collections?”, autocomplete can’t help with.


In some fields throwing away and rewriting is the standard, and it works, more or less. I'm thinking about scientific/engineering software: prototype in Python or Matlab and convert to C or C++ for performance/deployment constraints. It happens frequently with compilers too. I think migrating languages is actually more successful than writing second versions.


The issue is that moving the ship to where its passengers want it to be makes it more difficult for new passengers to get on.

This is clearly seen with typescript and the movement for "just use JS".

Furthermore, with LLMs, it should be easier than ever to experiment in one language and use another language for production loads.


I don't think types are expensive for MVP code unless they're highly complicated (but why would you do that?) Primitives and interfaces are super easy to type and worth the extra couple seconds.


Software quality only pays off in the long run. In the short term, garbage is quick and gets the job done.

Also, in my experience, the long run for software arrives in a couple of weeks.


PHP is a great example of the convergence of interfaces. Now they have different “PSR” standards for all sorts of things. There is one for HTTP clients, formatting, cache interfaces, etc. As long as your library implements the spec, it will work with everything else and then library authors are free to experiment on the implementation and contribute huge changes to the entire ecosystem when they find a performance breakthrough.

Types seem like a “feature” of mature software. You don’t need to use them all the time, but for the people stuck on legacy systems, having the type system as a tool in their belt can help to reduce business complexity and risk as the platform continues to age because tooling can be built to assert and test code with fewer external dependencies.


Python is ubiquitous in ML; often you have no choice but to use it.


> slowly become statically typed

They don't. They become gradually typed, which is a thing of its own.

You can keep the advantages of dynamic languages, like the ease of prototyping, but also lock down stuff when you need to.

It is not a perfect union; generally the trade-off is that you can either not achieve the same safety level as in a purely statically typed language, because you need to provide some escape hatches, or you need an extremely complex type system to capture the expressiveness of the dynamic side. Most of the time it is a mixture of both.

Still, I think this is the way to go. Not that dynamic typing won or static typing won, but that both are useful, and having a language support both is a huge productivity boost.
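As a minimal Python sketch of that workflow (parse_record and its field names are invented for illustration): the prototype stays untyped, and the same logic gets locked down later without forcing the rest of the codebase to follow.

    from typing import Any

    # Prototype: no annotations, quick to iterate on.
    def parse_record(raw):
        return {"id": raw["id"], "name": raw["name"].strip()}

    # Later, once the shape has settled: the type checker now verifies every caller,
    # while still-untyped modules keep working unchanged.
    def parse_record_typed(raw: dict[str, Any]) -> dict[str, str]:
        return {"id": str(raw["id"]), "name": str(raw["name"]).strip()}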


> how all the popular dynamic languages have slowly become statically typed

Count the amount of `Any` / `unknown` / `cast` / `var::type` in those codebases, and you'll notice that they aren't particularly statically typed.

The types in dynamic languages are useful for checking validity in the majority of cases, but can easily be circumvented when the types become too complicated.

It is somewhat surprising that dynamic languages didn't go the pylint way, i.e. checking the codebase by auto-determined types (determined based on actual usage).
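A minimal Python sketch of those escape hatches (the payload/retries names are made up): Any switches the checker off for everything derived from it, and cast is a promise rather than a check.

    import json
    from typing import Any, cast

    payload: Any = json.loads('{"retries": "3"}')

    # Everything derived from an Any goes unchecked...
    retries = payload["retries"]

    # ...and cast() verifies nothing at runtime: a type checker accepts this line,
    # but the value is still the string "3", so the annotation is simply wrong.
    retries_typed: int = cast(int, payload["retries"])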


Julia (by default) does the latter, and it's terrible. It makes it a) slow, because you have to do nonlocal inference through entire programs, b) impossible to type check generic library code where you have no actual usage, c) very hard to test that some code works generically, as opposed to just with these concrete types, and finally d) break whenever you have an Any anywhere in the code, since the chain of type information is broken.


In the discussion of static vs dynamic typing, solutions like TypeScript or annotated Python were not really considered.

IMHO the idea of a complex and inference-heavy type system that is mostly useless at runtime and compilation, but focused on essentially interactive linting, is relatively recent, and its popularity is due to TypeScript's success.

I think that static typing proponents were thinking of something more along the lines of Haskell/OCaml/Java rather than a type-erased system over a language where [1,2] > "0" is true because it is converted to "1,2" > "0".


OTOH I only came to realize that I actually like duck typing in some situations when I tried to add type hints to one of my Python projects (and then removed them again, because the actually important types consisted almost entirely of sum types, and what's the point of static typing if everything is a variant anyway).

E.g. when Python is used as a 'scripting language' instead of a 'programming language' (like for writing small command line tools that mainly process text), static typing often just gets in the way. For bigger projects where static typing makes sense I would pick a different language. Because tbh, even with type hints Python is a lousy programming language (but a fine scripting language).
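For what it's worth, the kind of hint meant here looks roughly like this (a sketch, not code from that project): once the important type is "anything JSON-shaped", the annotation documents little and constrains less.

    from typing import Union

    Json = Union[None, bool, int, float, str, list["Json"], dict[str, "Json"]]

    def count_strings(value: Json) -> int:
        if isinstance(value, str):
            return 1
        if isinstance(value, list):
            return sum(count_strings(v) for v in value)
        if isinstance(value, dict):
            return sum(count_strings(v) for v in value.values())
        return 0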


> Because tbh, even with type hints Python is a lousy programming language (but a fine scripting language).

I'd be interested in seeing you expand on this, explaining the ways you feel Python doesn't make the cut as a programming language while doing so as a scripting language.

The reason I say this is because, intuitively, I've felt this way for quite some time but I am unable to properly articulate why, other than "I don't want all my type errors to show up at runtime only!"


Learn how to use the tools to prevent that last paragraph.


Note1: Type hints are hints for the reader. If you cleverly discovered that your function is handling any type of data, hint that!

Note2: From my experience, in Java, I have NEVER seen a function that explicitly consumes an Object. In Java, you always name things. Maybe with parametric polymorphism, to capture complex typing patterns.

Note 3: unfortunately, you cannot subclass String to capture the semantics of its content.


> in Java, I have NEVER seen a function that explicitly consumes an Object

So you did not see any Java code from before version 5 (in 2004) then, because the language did not have generics for the first several years it was popular. And of course many were stuck working with older versions of the language (or variants like mobile Java) without generics for many years after that.


Exactly, I have never seen such code [*].

Probably because the adoption of the generics has been absolutely massive in the last 20 years. And I expect the same thing to eventually happen with Typescript and [typed] Python.

[*]: nor have I seen EJB1 or even EJB2. Spring just stormed them, in the last 20 years.


An example of a function in Java that consumes a parameter of type Object is System.out.println(Object o)

Many such cases.


Sounds to be more of a symptom of the types of programs and functions you have written, rather than something inherent about types or Python. I've never encountered the type of gerry-mangled scenario you have described no matter how throwaway the code is.


If you like dynamic types, have you considered using protocols? They are used precisely to type duck-typed code.
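A minimal sketch of that in Python (the Quacks/Duck/Robot names are invented): typing.Protocol checks duck typing structurally, with no inheritance required.

    from typing import Protocol

    class Quacks(Protocol):
        def quack(self) -> str: ...

    class Duck:
        def quack(self) -> str:
            return "quack"

    class Robot:
        def quack(self) -> str:
            return "beep boop"

    def greet(thing: Quacks) -> str:
        # Duck and Robot both satisfy Quacks purely by shape.
        return thing.quack()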


> all the popular dynamic languages have slowly become statically typed

I’ve heard this before, but it’s not really true. Yes, maybe the majority of JavaScript code is now statically-typed, via Typescript. Some percentage of Python code is (I don’t know the numbers). But that’s about it.

Very few people are using static typing in Ruby, Lua, Clojure, Julia, etc.


Types become very useful when the code base reaches a certain level of sophistication and complexity. It makes sense that for a little script they provide little benefit but once you are working on a code base with 5+ engineers and no longer understand every part of it having some more strict guarantees and interfaces defined is very very helpful. Both for communicating to other devs as well as to simply eradicate a good chunk of possible errors that happen when interfaces are not clear.


How many people are using Ruby, Lua, Clojure, Julia, etc.?


Fair enough, apart from Ruby they’re all pretty niche.

OTOH I’m not arguing that most code should be dynamically-typed. Far from it. But I do think dynamic typing has its place and shouldn’t be rejected entirely.

Also, I would have preferred it if Python had concentrated on being the best language in that space, rather than trying to become a jack-of-all-trades.


I have my doubts about majority of JavaScript being TypeScript.


You’re probably right. RedMonk [0] shows JavaScript and TypeScript separately and has the former well above the latter.

[0] https://redmonk.com/sogrady/2025/06/18/language-rankings-1-2...


Even if they're not written as TypeScript, there are usually add on definitions like "@types/prettier" and the like.


I disagree for Julia, but that probably depends on the definition of static typing.

For the average Julia package I would guess that most types are statically known at compile time, because dynamic dispatch is detrimental to performance. I consider that to be the definition of static typing.

That said, Julia functions seldom use concrete types and are generic by default. So the function signatures often look similar to untyped Python, but in my opinion this is something entirely different.


At least in Ruby there are major code bases using Stripe's Sorbet and the official RBS standard for type hints. Notably it's big code bases with large numbers of developers, fitting in with the trend most people in this discussion point to.


My last job was working at a company that is notorious for Ruby and even though I was mostly distant from it, there seemed to be a big appetite for Sorbet there.


The big difference between static typing in Python and Ruby is that Guido et al have embraced type hints, whereas Matz considers them to be (the Ruby equivalent of) “unpythonic”. Most of each language’s community follows their (ex-)BDFL’s lead.


PHP as well has become statically typed.

All the languages you name are niche languages compared to Python, JS (/ TS) and PHP. Whether you like it or not.


I think you're ignoring how, for some of us, gradual typing is a far better experience than languages with static types.

For example, what I like about PHPStan (tacked-on static analysis through comments) is that it offers so much flexibility when defining type constraints. It can even specify the literal values a function accepts besides the base type, and supports subtyping of nested array structures (basically support for comfortably typing out the nested structure of a JSON document the moment I decode it).
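For comparison, here's roughly what those two ideas look like in Python's typing module rather than PHPStan (User/set_log_level are invented names): literal value constraints and typed nested structures.

    from typing import Literal, TypedDict

    class Address(TypedDict):
        street: str
        zip_code: str

    class User(TypedDict):
        name: str
        address: Address

    def set_log_level(level: Literal["debug", "info", "error"]) -> None:
        print(f"log level set to {level}")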


Not ignoring, I just didn't write an essay. In all that time working with TypeScript there was very little that I found to be gradually typed, it was either nothing or everything, hence my original comment. Sure some things might throw in a bunch of any/unknown types but those were very much the rarity and often some libraries were using incredibly complicated type definitions to make them as tight as possible.


Having worked with Python, TypeScript and now PHP, it seems that PHPStan allows this gradual typing, while TypeScript kinda forces you to start with strict mode in serious projects.


Coming from Java's extreme verbosity, I just loved the freedom of Python 20 years ago. Working with complex structures with mixed types was a breeze.

Yes, it was your responsibility to keep track of correctness, but that also taught me to write better code, and better tests.


Writing tests is harder work than writing the equivalent number of type hints though.


Type hints and/or stronger typing in other languages are not good substitutes for testing. I sometimes worry that teams with strong preferences for strong typing have a false sense of security.


People write tests in statically typed languages too, it's just that there's a whole class of bugs that you don't have to test for.


Hints are not sufficient, you’ll need tests anyway. They somewhat overlap.


Writing and maintaining tests that just do type checking is madness.

Dynamic typing also gives tooling such as LSPs and linters a hard time figuring out completions/references lookup etc. Can't imagine how people work on moderate to big projects without type hints.


AI tab-complete & fast LSP implementations made typing easy. The tools changed, and people changed their minds.

JSON's interaction with types is still annoying. A deserialized JSON could be any type. I wish there was a standard python library that deserialized all JSON into dicts, with opinionated coercing of the other types. Yes, a custom normalizer is 10 lines of code. But, custom implementations run into the '15 competing standards' problem.

Actually, there should be a popular type-coercion library that deals with a bunch of these annoying scenarios. I'd evangelize it.
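A sketch of the sort of normaliser being described (the policy here is invented, which is exactly the '15 competing standards' problem): always hand back a dict and reject anything else loudly.

    import json
    from typing import Any

    def load_json_dict(raw: str) -> dict[str, Any]:
        # Deserialize JSON, insisting that the top level is an object.
        value = json.loads(raw)
        if isinstance(value, dict):
            return value
        raise ValueError(f"expected a JSON object, got {type(value).__name__}")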


Type hints / gradual typing is crucially different from full static typing though.

It’s valid to say “you don’t need types for a script” and “you want types for a multi-million LOC codebase”.


Static typing used to be too rigid and annoying to the point of being counterproductive. After decades of improvement of parsers and IDEs they finally became usable for rapid development.


Everything goes in cycles. It has happened before and it will happen again. The software industry is incredibly bad at remembering lessons once learned.


That's because many do small things that don't really need it; sure, there are some people doing larger stuff who are happy to be the sole maintainer of a codebase or to replace the language's types with unit-test type checks.

And I think they can be correct for rejecting it: banging out a small useful project (preferably below 1000 loc) flows much faster if you just build code doing things rather than stop to annotate (which can quickly become a mind-sinkhole of naming decisions that interrupts a building flow).

However, even less complex 500 loc+ programs without typing can become a pita to read after the fact and approaching 1kloc it can become a major headache to pick up again.

Basically, you can't beat the speed of going nude, but size and complexity are always an exponential factor in how hard continuing and/or resuming a project is.


Thing is, famous dynamic languages of the past, Common Lisp, BASIC, Clipper, FoxPro, all got type hints for a reason, then came a new generation of scripting languages made application languages, and everyone had to relearn why the fence was in the middle of the field.


I think both found middle ground. In Java you don’t need to define the type of variables within a method. In Python people have learned that types in method arguments are a good thing.


> After decades of people saying

You have to admit that the size and complexity of the software we write has increased dramatically over the last few "decades". Looking back at MVC "web applications" I created in the early 2000s, and comparing them to the giant workflows we deal with today... it's not hard to imagine how dynamic typing was/is OK to get started, but when things exceed one's "context", type hints help.


I like static types, but advocating for enforcing them in every situation is different. Adding them when you need to (Python currently) seems a better strategy than forcing you to set them always (TypeScript is in between, as it can often infer them).

Many years ago I felt Java's typing could be overkill (some types could have been deduced from context and they were too long to write), so it's probably more an issue of the maturity of the tooling than anything else.


What I would need is a statically typed language that has first class primitives for working with untyped data ergonomically.

I do want to be able to write a dynamically typed function or subsystem during the development phase, and "harden" it with types once I'm sure I got the structure down.

But the dynamic system should fit well into the language, and I should be able to easily and safely deal with untyped values and convert them to typed ones.


So… Typescript?


Yes, the sad part is that some people experienced early TypeScript, which for some reason had the idea of forcing "class" constructs into a language where most people weren't using or needing them (and still aren't).

Somewhere around TypeScript 2.9 it finally started adding constructs that made gradual typing of real-world JS code sane, but by then there was a stubborn perception of it being bad/bloated/Java-ish, etc., despite it maturing into something fairly great.


I like Typescript :)


The need for typing changed, when the way the language is used changed.

When JavaScript programs were a few hundred lines to add interactivity to some website, type annotations were pretty useless. Now the typical JavaScript project is far larger and far more complex. The same goes for Python.


dynamically-typed languages were typically created for scripting tasks - but ended up going viral (in part due to d-typing), the community stretched the language to its limits and pushed it into markets it wasn't designed/thought for (embedded python, server-side js, distributed teams, dev outsourcing etc).

personally i like the dev-sidecar approach to typing that Python and JS (via TS) have taken to mitigate the issue.


JavaScript is no longer just scripting. Very large and complex billion-dollar apps were being written in pure JavaScript. It grew up.

I guess Python is next.


Next stop is to agree that JSON is really NOT the semantic data exchange serialization for this "properly typed" world.


Then what is?

Everybody knows the limitations of JSON. Don't state the obvious problem without stating a proposed solution.


The RDF structure is a graph of typed instances of typed objects, serializable as text.

Exchanging RDF, more precisely its [more readable] "RDF/turtle" variant, is probably what will eventually come to the market somehow.

Each object of an RDF structure has a globally unique identifier, is typed, maintains typed links with other objects, and has typed values.


For an example of RDF being exchanged between a server and a client, you can test

https://search.datao.net/beta/?q=barack%20obama

Open your javascript console, and hover the results on the left hand side of the page with your mouse. The console will display which RDF message triggered the viz in the center of the page.

Update: you may want to FIRST select the facet "DBPedia" at the top of the page, for more meaningful messages exchanged.

Update2: the console does not do syntax highlighting, so here is the highlighted RDF https://datao.net/ttl.jpg linked to the 1st item of " https://search.datao.net/beta/?q=films%20about%20barack%20ob... "


That's a circular argument. What serialization format would you recommend? JSON?


Turtle directly.

JSON forces you to fit your graph of data into a tree structure, which poorly captures the cardinalities of the original graph.

Plus of course, the concept of object type does not exist in JSON.


Thank you, I did not realize that RDF has its own serialization format. I'm reading about it now.


I think that the practically available type checkers evolved to a point where many of the common idioms can be expressed with little effort.

If one thinks back to some of the early statically typed languages, you'd have a huge rift: you either had this entirely weird world of Caml and Haskell (which can express most of what Python type hints have, and has been able to for many years), or something like C, in which types are merely some compiler hints tbh. Early Java may have been a slight improvement, but eh.

Now, especially with decent union types, you can express a lot of idioms of dynamic code easily. So it's a fairly painless way to get type completion in an editor, so one does that.


Trends change. There is still no hard evidence that static types are net positive outside of performance.


Huh. It's almost like these people didn't know what they were talking about. How strange.


Well, we do coalesce on certain things... some static type languages are dropping type requirements (Java and `var` in certain places) :D


There's no dropping of type requirements in Java, `var` only saves typing.

When you use `var`, everything is as statically typed as before, you just don't need to spell out the type when the compiler can infer it. So you can't (for example) say `var x = null` because `null` doesn't provide enough type information for the compiler to infer what's the type of `x`.


> `var` only saves typing.

this is a lovely double entendre


var does absolutely nothing to make Java a less strictly typed language. There is absolutely no dropping of the requirement that each variable has a type which is known at compile time.

Automatic type inference and dynamic typing are totally different things.


I have not written a line of Java in at least a decade, but does Java not have any 'true' dynamic typing like C# does? Truth be told, the 'dynamic' keyword in C# should only be used in the most niche of circumstances. Typically, only practitioners of Dark Magic use the dynamic type. For the untrained, it often leads one down the path of hatred, guilt, and shame. For example:

    dynamic x = "Forces of Darkness, grant me power";
    Console.WriteLine(x.Length); // Dark forces flow through the CLR
    x = 5;
    Console.WriteLine(x.Length); // Runtime error: CLR consumed by darkness.

C# also has the statically typed 'object' type which all types inherit from, but that is not technically a true instance of dynamic typing.


Same nonsense repeated over and over again... There aren't dynamic languages. It's not a thing. The static types aren't what you think they are... You just don't know what you are saying and your conclusion is just a word salad.

What happened to Python is that it used to be a "cool" language, whose community liked to make fun of Java for their obsession with red-taping, which included the love for specifying unnecessary restrictions everywhere. Well, just like you'd expect from a poorly functioning government office.

But then everyone wanted to be cool, and Python was adopted by the programming analogue of government bureaucrats: large corporations which treat programming as a bureaucratic mill. They don't want fun or creativity or one-off bespoke solutions. They want an industrial process that works on as large a scale as possible, to employ thousands of the worst quality programmers, but still reliably produce slop.

And incrementally, Python was made into Java. Because, really, Java is great for producing slop on an industrial scale. But the "cool" factor was important to attract talent because there used to be a shortage, so, now you have Python that was remade to be a Java. People who didn't enjoy Java left Python over a decade ago. So that Python today has nothing in common with what it was when it was "cool". It's still a worse Java than Java, but people don't like to admit defeat, and... well, there's also the sunk cost fallacy: so much effort was already spent at making Python into a Java, that it seems like a good idea to waste even more effort to try to make it a better Java.


Yeah, this is the lens through which I view it. It's a sort of colonization that happens, when corporations realize a language is fit for plunder. They start funding it, then they want their people on the standards boards, then suddenly the direction of the language is matched very nicely to their product roadmap. Meanwhile, all the people who used to make the language what it was are bought or pushed out, and the community becomes something else entirely.


I'm reasonably versed in Haskell and my response would be that it shouldn't make that much difference to you what they've written in here. I've yet to see any code in the wild using the backpack extension.


LFS and git-annex have subtly different use cases in my experience. LFS is for users developing something with git that has large files in the repo like the classic game development example. git-annex is something you'd use to keep some important stuff backed up which happens to involve large files, like a home folder with music or whatever in it. In my case I do the latter.


What it works really well at is storing research data. LFS can't upload to arbitrary webdav/S3/sharepoint/other random cloud service.


I think the fact that it's a horrible language is a big contributor to the frameworks being horrible as well. There's all these incidental sacrifices that have to be made which bleed through into everything else, like handling null and undefined.


I always maintain that this is just familiarity, Haskell is in truth quite a simple language. It's just that the way it works isn't similar to the languages most people have started with.


I believe there's a strange boundary around the idea of simple vs easy (to quote Rich Hickey) and I don't know what to call it.. (or if somebody has named it before)

functional and logical languages are indeed very simple, small core, very general laws.. (logic, recursion, some types) but grokking this requires unplugging from a certain kind of reality.

Most people live in the land of tools, syntax and features.. they look paradoxically both simpler than SML/Haskell, so people are seduced by them, yet more complex at the same time (class systems are often large and full of exceptions), but that also makes it feel like they're learning something advanced (and familiar, unlike Greek single variables and categ-oids :).


People intuitively expect things to happen imperatively (and eagerly). Imperativeness is deeply ingrained in our daily experience, due to how we interact with the world. While gaining familiarity helps, I’m not convinced that having imperative code as the non-default case that needs to be marked specially in the code and necessitates higher-order types is good ergonomics for a general-purpose programming language.


> People intuitively expect things to happen imperatively (and eagerly).

Eagerly? Yes. Imperatively? Not as much as SW devs tend to think.

When the teacher tells you to sort the papers alphabetically, he's communicating functionally, not imperatively.

When the teacher tells you to separate the list of papers by section, he's communicating functionally, not imperatively.

When he tells you to sum up the scores on all the exams, and partition by thresholds (90% and above is an A, 80% and above is a B, etc.), he's communicating functionally, not imperatively.

No one expects to be told to do it in a "for loop" style:

"Take a paper, add up the scores, and if it is more than 90%, put it in this pile. If it is between 80-90%, put it in this pile, ... Then go and do the same to the next paper."

People usually don't talk that way.


Nope. The fact that he's telling you a high-level command is irrelevant. (If you didn't know what “sort the papers” means, he'd have to tell you in more detail; it's just the difference between calling your built-in sort routine or coding it.)

Anyway: He's telling you to do something, and you do it. It doesn't get more imperative than that.


You’re talking about what vs. how, but imperative vs. pure-functional is both about the how, not the what.

When you’re explaining someone how to sort physical objects, they will think in terms of “okay I’ll do x [a physical mutable state change] and then I’ll have achieved physical state y, and then I’ll do z (etc.)”.


Well, declaratively, not functionally. But point mostly stands


Familiarity is a part, but abstract reasoning is fundamentally harder than concrete.

Understanding the map signature in Haskell is more difficult than any C construct. Now do IO monad.


> Understanding the map signature in Haskell is more difficult than any C construct.

This is obviously false. The map type signature is significantly easier to understand than pointers, referencing and dereferencing.

I am an educator in computer science - the former takes about 30-60 seconds to grok (even in Haskell, though it translates to most languages, and even the fully generalised fmap), but it is a rare student that fully understands the latter within a full term of teaching.


Are the students who failed the pointer class the same ones in the fmap class?

I didn’t say “using map” I said understanding the type signature. For example, after introducing map can you write its type signature? That’s abstract reasoning.

Pointers are a problem in Haskell too. They exist in any random access memory system.


Whether pointers exist is irrelevant. What matters is if they're exposed to the programmer. And even then it mostly only matters if they're mutable or if you have to free them manually.

Sure, IORef is a thing, but it's hardly comparable to the prevalence of pointers in C. I use pointers constantly. I don't think I've ever used an IORef.


If you have an array and an index, you have all the complexity of pointers. The only difference is that Haskell will bounds check every array access, which is also a debug option for pointer deref.


You are very disconnected from learners. They get confused over mutability, not bounds checking, which isn't consistently an option for raw pointers anyway.


Hard to believe that “learners ... get confused over mutability” more than functional programming when millions of middle-schoolers grokked the idea of “mutability” in the form of variables in Basic, while I (and at a guess, at least thousands of other experienced programmers) have no fucking idea about pretty much all the stuff in most of the tens or hundreds of articles and discussions like this that we've seen over the years. Just plain stating that “mutability is more difficult” without a shred of evidence ain't gonna fly.


That’s an unfair comparison because these are two unrelated concepts. In many languages, pointers are abstracted away anyway. Something more analogous would be map vs a range loop.


And I'd say the average React or Java developer these days understands both pretty well. It's the default way to render a list of things in React. Java streams are also adopted quite well in my experience.

I wouldn't say one is more difficult than the other.

IMO `map` is a really bad example for the point that OP is trying to make, since it's almost everywhere these days.

FlatMap might be a better example, but people call `.then` on Promises all the time.

I think it might just be familiarity at this point. Generally, programming has sort of become more `small f` functional. I'd call purely functional languages like Haskell Capital F Functional, which are still quite obscure.


Well, he responded to someone saying the type signature of map was more complicated than ANY C construct.


Fair point


`map` aint so bad...

    map :: (a -> b) -> [a] -> [b]
I suppose an absolute beginner would need someone to explain that Haskell type signatures can be read by slicing at any of the top level arrows, so that becomes either:

> Given a function from `a` to `b`, return a function from a `list of as` to a `list of bs`.

or:

> Given a function from `a` to `b` and a `list of as`, return a `list of bs`.

I find the first to be the more intuitive one: it turns a normal function into a function that acts on lists.

Anecdotally, I've actually found `map` to be one of the most intuitive concepts in all of programming. It was only weird until I'd played around with it for about 10m, and since then I've yet to be surprised by its behavior in any circumstance. (Although I suppose I haven't tried using it over tricky stuff like `Set`.)

`fmap` is admittedly a bit worse...

    fmap :: Functor f => (a -> b) -> f a -> f b
But having learned about `map` above, the two look awfully similar. Sure enough the same two definitions above still work fine if you replace `list` with this new weird `Functor` thing. Then you look up `Functor` you learn that it's just "a thing that you can map over" and the magic is mostly gone. Then you go to actually use the thing and find that in Haskell pretty much everything is a `Functor` that you can `fmap` over and it starts feeling magical again.


You and I have a math part of our brain that appreciate the elegance from the algebraic structure.

I’m saying that thing you did where you start representing concepts by letters which can be populated by concrete objects is not a skill most people have.


Oh. Yeah that's a good point.


Maybe at its core, but Haskell in the wild is monstrously complex because of all the language extensions. Many different people use different sets of extensions so you have to learn them to understand what’s going on!


Not really, the vast majority of extensions just relax unnecessary restrictions. And these days it's easy to just enable GHC2021 or GHC2024 and be happy.


Yup, that felt incredible even at the time, let alone now.


Conversely I live in Gillingham, where there's several references to him and his history.

