
So think of it this way: you want to avoid calling malloc() to increase performance. JavaScript does not have the semantics to avoid this. You also want to avoid looping. JavaScript does not have the semantics to avoid it.

If you haven’t had experience with actual performant code, JS can seem fast. But it is a Huffy bike compared to a Kawasaki H2. Sure, it is better than a kid’s trike, but it is not a performance system by any stretch of the imagination. You use JS for convenience, not performance.


IIRC V8 actually does some tricks under the hood to avoid mallocs, which is why Node.js can be unexpectedly fast (I saw some benchmarks where it was only about 4x slower than equivalent C code) - for example it recycles objects of the same shape (which is why it is beneficial not to modify object structure in hot code paths).

Hidden classes is a whoooole thing. I’ve switched several projects to Maps for lookup tables so as not to poke that bear. Storage is the unhappy path for this.
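
For anyone who hasn't poked that bear yet, here is a minimal sketch of what "don't modify object structure in hot paths" and "use a Map for dynamic keys" look like in practice. The function and field names are made up for illustration:

    // V8 assigns each object layout a hidden class ("shape"); adding or
    // deleting properties after construction forces a shape transition,
    // which can deoptimize hot code.

    // Shape-unstable: records grow an extra property only sometimes.
    function tagSlow(record, extra) {
      if (extra) record.extra = extra; // mutates the shape in a hot path
      return record;
    }

    // Shape-stable alternative: declare every field up front...
    function makeRecord(id, extra) {
      return { id, extra: extra ?? null }; // one shape for all records
    }

    // ...or sidestep shapes entirely for dynamic lookup tables with a Map.
    const lookup = new Map();
    lookup.set("some-dynamic-key", 42);
    console.log(lookup.get("some-dynamic-key")); // 42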

(to be fair the memory manager reuses memory, so it's not calling out to malloc all the time, but yes a manually-managed impl. will be much more efficient)

Whichever way you manage memory, it is overhead. But the main problem is the language does not have zero copy semantics so lots of things trigger a memcpy(). But if you also need to call malloc() or even worse if you have to make syscalls you are hosed. Syscalls aren’t just functions, they require a whole lot of orchestration to make happen.

JavaScript engines are also JITted, which is better than a straight interpreter but, outside of microbenchmarks, worse than compiled code.

I use it for nearly all my projects. It is fine for most UI stuff and is OK for some server stuff (though Python is superior in every way). But I would never want to replace something like nginx with a JavaScript based web server.


V8 does a lot of things to prevent copies. If you create two strings, concat them, and assign the result to a third var, no copy happens (c = a + b, c is a rope). Objects are by reference... Strings are interned... The main gotcha with copies is when you need to convert from the internal representation (utf16) to utf8 for output; it will copy then.
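
A rough Node sketch of where that conversion cost shows up (sizes and names here are just illustrative):

    // Building up a large string: the concatenation can be represented
    // internally as a rope, so no character data is copied at this point.
    const a = "x".repeat(1 << 20);
    const b = "y".repeat(1 << 20);
    const c = a + b;

    // Writing it out (to a socket, a file, etc.) forces encoding the
    // engine's internal representation to UTF-8, and that does copy.
    const bytes = Buffer.from(c, "utf8");
    console.log(bytes.length); // 2 MiB of freshly encoded data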

What a sad miserable cult it is, to be obsessed with performance, and to so desperately need to beat up on a language. I don't know if people spewing this shit are aware of how dull their radical extremism is?

You really don't have to go far down the techempower benchmarks to get to JS. Losing, let's just say, 33% performance versus an extremely tuned, much more minimalist framework on what is practically a microbenchmark is far from the savage fall from civilization & decadence of man, deserving far less scorn and shame than what the cult of js hate fuels their fires on.

I could go on about how JS has benefitted from having incredible work poured into it, because it is the most popular runtime on the planet, because it's everywhere, because there was a hot war for companies to try to beat each other on making their js runtime good (being one of the only langs with many runtimes is interesting). It's a bit excessive & maybe it should have been a more deserving language perhaps (we can debate endlessly), but man... It just doesn't matter. Stressing out, being so mean and nasty (huffy vs Kawasaki, @hoppp's even worse top-voted half sentence snark takedown), trying to impress upon people this image that it's all just so bad: I think there's such a ridiculous off-kilter tilting way too hard here, and far less of it is about good reasons and valid concerns; so so so much of it is this bandwagon of negative energy, is radical overconcern.

Like most tools & languages, it's what you do with it. With JS, we have a problem that (mainstream) software hadn’t faced before, which is client server architecture, that the client might be a cruddy cheap phone somewhere and/or on a dodgy link. We are trying to build experiences that work across systems, with significant user-perceived latency sometimes. And so data & systems architecture matters a lot. Trying to keep the client primed with up to date data that it needs to render, doing client work without blocking/while maintaining user responsiveness, are hard fun multi-threaded (webworker) challenges, for those folks that care.

And those challenges aren't unique to js. Other languages have similar challenges. Trying to multithread a win32 UI to avoid blocking also was a bit of a nightmare, working off the main thread. Doing data sync is a nightmare. There are so many ways to get stuff wrong. And I think a lot of the js code out there does get it wrong. But we experience hundreds or thousands of websites a week, and many of the crucial tools we use that are js client-server are badly architected. I sympathize with why js has such a bad rap. To me it usually seems like architectural app design issues, that companies are too busy building features to really consider the core, to establish data architectures that don't block or lag. And that's not a specific js problem.

There are faster systems, yes, but man, the energy being poured into blaming the worst of the world on JS seems ridiculous to me, like such a sad drag, one that avoids any interest or fascination in what is so so interesting & so worthy. The language is the least remarkable part of the equation, practically doesn't matter, and the marginal performance levels are (with some exception for very specific cases) almost never a remotely critical factor. Just so so so tired, knowing such enormous and pointless frivolous scorn and disdain is going to overwhelm all conversations, going to take over every thread forever, when it matters so so little, is so very rarely a major determinant.

JS does not have to be the thought terminating cliche to every thread ever (and humbly, I'd assert it doesn't deserve that false conviction either. But either way!).


I was a Java programmer out of school. My first specialization became performance. It’s the same problem with JavaScript. There’s at least an order of magnitude speed difference between using your head and just vomiting code, and probably more like a factor of forty. Then use some of the time you saved to do deep work to hit an even two orders of magnitude and it can be fast. If you’re not an idiot that is.

bcantrill has an anecdote about porting some code to Rust and it was way faster than he expected. After digging he found it was using btrees under the hood, which was a big part of the boost. B-trees being a giant pain in the ass to write in the donor language but a solved problem in Rust. It’s a bit like that. You move the effort around and it evens out.


> With JS, we have a problem that (mainstream) software hadn't faced before, which is client server architecture, that the client might be a cruddy cheap phone somewhere and/or on a dodgy link.

You can't possibly be serious.


What software that a good % of computer users were using in the year 2000 was client-server? What % of developers were doing client server?

What % were doing client server a decade later?

To my view, it had largely been the age of Personal Computing before the web happened. There was enterprise software, some with client-server architectures doing important things, sure, but very very very few people saw that in action, experienced the constraints of what client server meant. The web was the new thing changing the face of software for the world, and it brought client-server to the mainstream (consumers and devs).

I can't understand (and you have contributed nothing to clarify) your scathing shitty low grade unqualified hate. Are you at all serious? Why such an unhackerly comment that provokes no thought and raises no interesting ideas?


2000 was the peak of the dot-com boom. In the US, half of Silicon Valley was developing for client-server and nearly every person with a computer was using client-server applications.

To give a clearer example, Napster, where I worked, had 60 million users, mostly in the US and we were far from the largest.

The cruddy cheap phones one might complain about today are several times more powerful than the servers we used then, and the dodgy connections today are downright sublime compared to the dial-up modems over noisy phone lines.




Architecture is far more important than runtime speed. (People are so easily swayed by "JS SUCKS LOL" because of experiences with terrible & careless client-server architectures, far more than js itself being "slow".)

The people ripping into js suck up the interesting energies, and bring nothing of value.


If we are discussing C10K we are by definition discussing performance. JavaScript does not enter this conversation any more than BASIC. Yes of course architecture matters. Nobody has been arguing otherwise. But the point is that if you take the best architecture and implement it in the best available JS environment you are still nowhere close to the same architecture implemented in a systems language in terms of performance. You are welcome to your wishful thinking that you do not need to learn anything besides JavaScript when it comes to this conversation. But no matter how hard you argue it will continue being wishful thinking.

We are discussing tech where having a custom userland TCP stack is not just a viable option but nearly a requirement and you are talking about using a lighter JS framework. We are not having the same conversation. I highly recommend you get off Dunning-Kruger mountain by writing even a basic network server using a systems language to learn something new and interesting. It is more effort than flaming on a forum but much more instructive.


Techempower isn't perfect, but a 33% loss of speed is really not bad versus the best of the best web stacks. Your attempt to belittle is the same tired heaping of scorn, when the data shows otherwise. But it's so cool to hate!

Libuv isn't the most perfect networking stack on the planet, no. It could be improved. We could figure out new ways to get io_uring or some eBPF-based or even dpdk bypass userland stack running. Being JS does not limit what you do for networking, in any way. At all. It adds some overhead to the techniques you choose, requires some glue layer. So yes, some cpu and memory efficiency losses. And we can debate whether that's 20% or 50% or more. Techempower shows it's half an order of magnitude worse (10^0.5, i.e. running at roughly a third of the speed). Techempower is a decent proxy for what is available, what you get, without being extremely finicky.

Maybe that is the goal, maybe this really is a shoot-for-the-moon effort where you abandon all existing work to get extreme performance, as c10k was, but that makes the entire lesson not actually useful or practical for almost everyone. And if you want to argue against techempower being a proxy for others, for doing extreme work & what is possible at the limit, you have to consider more extreme js networking than what comes out of the box in Node.js or Deno too, a custom engine there as well.

It's serious sad cope to pretend like it's totally different domains, that js is inherently never going to have good networking, that it can't leverage the same networking techniques you are trying to vaunt over js. The language just isn't that important, the orders of magnitude (<<1 according to the only data/evidence in this thread) are not actually super significant.


Look you seem to have made up your mind and are unwilling to listen to knowledge or experience. The tech industry does not do well with willful ignorance so I wish you luck with all that.

> unwilling to listen to knowledge and experience

I'm more than happy to listen! But there's not really content or lessons happening. It's just an endless stream of useless garbage no effort dunking and "trust me bro."

I would be far less offended by the "js sucks lol" thought terminating cliche & dunking if they came at all correct. But I'm the only one citing any evidence here and your assertions that better networking is impossible in js, js could never userland bypass, just don't wash or make sense.

I don't think js is ideal to spend all your time optimizing for. It's a bit ridiculous to imagine a userland kernel bypass like dpdk that's married to v8. But it doesn't seem inconceivable at all.

Assume both of us are being willful to our ends. I'd way rather be willful of possibility than be so ironclad, locked-down, assured-beyond-belief willful of impossibility. I think that's an insult to the hacker spirit. And it's a sad fate that every conversation has the same useless loud bellowing that JS is impossibly bad; this thought-terminating cliche is a wicked evil to inflict on the world, adding chiefly to the din. Especially when it never brings any evidence, when it's so self assured.


You are clearly engaged in sealioning.

No, I'm pointing out that you have no evidence at all and aren't trying to speak reasonably or informatively, that you are bullying a belief you have without evidence.

I'm presenting evidence. I'm clarifying as I go. I'm adding points of discussion. I'm building a case. I just refuse to be met and torn down by a bunch of useless hot air saying nothing.


33% loss of speed is really not bad if you don't care about 33% loss of speed.

Nothing about the current js ecosystem screams good architecture; it's hacks on hacks and a culture of totally ignoring everything outside of your own little bubble. Reminds me of the early 2000s javabeans scene.

Reminds me of early 2000s JS scene, in fact. Anything described as "DHTML" was a horrid hack. Glad to see Lemmings still works, though, at least in Firefox dev edition. https://www.elizium.nu/scripts/lemmings/

Unqualified broadscale hate making no assertions anyone can look at and check? This post is again such wicked nasty Dark Side energy, fueled by such empty vacuous nothing.

It was an incredible era that brought amazing aliveness to the internet. Most of it was pretty simple & direct, and would be more intelligible to folks today than today would be intelligible & legible to folks from then.

This behavior is bad & deserves censure. The hacker spirit is free to be critical, but in ways that expand comprehension & understanding. To classify a whole era as a "horrid hack" is injurious to the hacker spirit.


after dealing with DHTML in the late 90s, i decided to check out of frontends/javascript. i had a feeling that this whole area would be rather turbulent

It has been long enough that C10K is not in common software engineer vernacular anymore. There was a time when people did not trust async anything. This was also a time when PHP was much more dominant on the web, async database drivers were rare and unreliable, and you had to roll your own thread pools.

Heh I bought at $20 and sold at $1000. It was only 1 BTC. I bought I think 2 or 3 and mined a bit too but back then exchanges got hacked all the time. Honestly I do not feel bad. Every pizza I bought could have been tens of thousands too if I knew how BTC would turn out (or Nvidia or Netflix or the outcome of any major sporting event, etc.). I made a profit and it helped at the time. I am not sore about it because I also do not know how many dumb investments I passed up that would have just lost money.

You would think so, yet here I am sitting with a node_modules full of crud placed there by npm, waiting for the next supply chain attack.

npm isn't the issue there; it's the ts/js community and their desire to use a library for everything. In communities that do not consider dependencies to be a risk, you will find this showing up in time.

The node supply chain attacks are also not unique to the node community; you see them happening on crates.io and many other places. In fact, the build-time scripts that cause issues in node modules are probably worse off given the flexibility of crate build scripts, and they're going to be harder to work around than in npm.


I don't see how that follows.

uv doesn't exactly stop python package supply chain attacks...


That argument is FUD. The people who created the NPM package manager are not the people who wrote your dependencies. Further, supply chain attacks occur for reasons that are entirely outside NPM's control. Fundamentally they're a matter of trust in the ecosystem — in the very idea of installing the packages in the first place.

Lack of stronger trust controls is part of the larger issue with npm. Pip, Maven and Go are not immune either, but they do things structurally better to shift the problem.

Go: Enforces global, append-only integrity via a checksum database and version immutability; once a module version exists, its contents cannot be silently altered without detection, shifting attacks away from artifact substitution toward “publish a malicious new version” or bypassing the proxy/sumdb.

Maven: Requires structured namespace ownership and signed artifacts, making identity more explicit at publish time; this raises the bar for casual impersonation but still fundamentally trusts that the key holder and build pipeline were not compromised.


For Go, there are more impactful features: minimal version selection and the culture of fewer, but larger dependencies.

Your average Go project likely has 10x fewer deps than a JS project. Those deps will not get auto-updated to their latest versions either. Much lower attack surface area.


I don't think cargo is much better in that respect. It's what happens when instead of a decent standard library and a few well established frameworks you decide that every single little thing must be a separate project.

Rust is also a systems language. I am still wrapping my mind around why it is so popular for so many end projects when its main use case and goals were basically writing a browser and maybe OS drivers.

But that’s precisely why it is good for developer tools. And it turns out people who write systems code are really damn good at writing tools code.

As someone who cut my teeth on C and low level systems stuff I really ought to learn Rust one of these days but Python is just so damn nice for high level stuff and all my embedded projects still seem to require C so here I am, rustless.


If python's painpoints don't bother you enough (or you are already comfortable with all the workarounds), then I'm not sure Rust will do much for you.

What I like about Rust is ADTs, pattern matching, execution speed. The things that really give me confidence are error handling (the right balance between the "you can't accidentally ignore errors" of checked exceptions and easy escape hatches for when you want to YOLO), and the rarity of "looks right, but is subtly wrong in dangerous ways" that I ran into a lot in dynamic languages and more footgun-prone languages.

Compile times suck.


I rarely if ever encounter bugs that type checking would have fixed. Most common types of bugs for me are things like forgetting that two different code paths access a specific type of database record and when they do both need to do something special to keep data cohesive. Or things like concurrency. Or worst of all things like fragile subprocesses (ffmpeg does not like being controlled by a supervisor process). I think all in all I have encountered about a dozen bugs in Python that were due to wrong types over the past 17 years of writing code in this language. Maybe slightly more than that in JS. The reason I would switch is performance.

Same. I like the type hints -- they're nice reminders of what things are supposed to be -- but I've essentially ~never run into bugs caused by types, either. I've been coding professionally in Python for 10+ years at this point.

It just doesn't come up in the web and devtools development worlds. Either you're dealing with user input, which is completely untrusted and has to be validated anyways, or you're passing around known validated data.

The closest is maybe ETL pipelines, but type checking can't help there either since your entire goal is to wrestle with horrors.


You can validate user input with types using stuff like typedload (which i wrote) or similar runtime type checkers.

“The user can choose between starting their new policy on the first day of employment, the first day of the fiscal year, on a specific date, or some number of days after their prior policy expires. If they choose the first day of the fiscal year, the user must specify when their company’s fiscal year starts. If they choose a specific date they must choose a date that is after the first business day of the next month and no later than December 31st of the year that month belongs to. If the user specified some number of months after their current policy expired the user must provide a policy number and the number of days no less than 1 and no more than 365.”

Type validation can help with some of that but at some point it becomes way easier to just use imperative validation for something like this. It turns out that validating things that are easy is easy no matter what you do, and validating complex rules that were written by people who think imperatively is almost impossible to do declaratively in a maintainable way.
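
To make that concrete, here is a sketch of the imperative version of rules like those (field names, the input shape, and the "first business day" shortcut are all invented for illustration):

    // Hypothetical imperative validation of the policy-start rules above.
    function validatePolicyStart(input, today = new Date()) {
      const errors = [];
      switch (input.startType) {
        case "employment-start":
          break; // no extra fields required
        case "fiscal-year-start":
          if (!input.fiscalYearStart) errors.push("fiscal year start is required");
          break;
        case "specific-date": {
          const chosen = new Date(input.date);
          // Simplification: treat the 1st of next month as the "first business day".
          const firstOfNextMonth = new Date(today.getFullYear(), today.getMonth() + 1, 1);
          const endOfThatYear = new Date(firstOfNextMonth.getFullYear(), 11, 31);
          if (!(chosen > firstOfNextMonth && chosen <= endOfThatYear)) {
            errors.push("date must be after the first business day of next month and within that year");
          }
          break;
        }
        case "after-prior-policy":
          if (!input.priorPolicyNumber) errors.push("prior policy number is required");
          if (!(input.days >= 1 && input.days <= 365)) errors.push("days must be between 1 and 365");
          break;
        default:
          errors.push("unknown start type");
      }
      return errors;
    }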


attrs and dataclasses let you define custom validators that can be used together with typedload…

For me, ADTs and pattern matching are about expressivity, not type checking. Type checking really helps with refactoring quickly. If we’re measuring experience in years, I was a rubyist for over a decade and have written python for another 5 years after that, so I have some dynamic language bona fides.

I write scripts in rust as a replacement for bash. It's really quite good at it. Aside from perl, it's the only scripting language that can directly make syscalls. It's got great libraries for parsing, configuration management, and declarative CLIs built right in.

Sure, it's a little more verbose than bash one-liners, but if you need any kind of error handling and recovery, it's way more effective than bash and doesn't break when you switch platforms (i.e. mac/bsd utility incompatibilities with gnu utilities).

My only complaint would be that dealing with OsString is more difficult than necessary. Way too much of the stdlib encourages programmers to just pretend "non-utf8 paths don't exist" and panic/ignore when encountering one. (Not a malady exclusive to rust, but I wish they'd gotten it right)

Example I had handy: <https://gist.github.com/webstrand/945c738c5d60ffd7657845a654...>


Paths are hard because they usually look like printable text, but don't have to be text. POSIX filenames are octet strings not containing 0x2F or 0x00. They aren't required to contain any "printable" characters, or even be valid text in any particular encoding. Most of the Rust stdlib you're thinking of is for handling text strings, but paths aren't text strings. Python also has the same split between Pathlib paths & all other strings.

Yeah, the issue is that there are no utilities for manipulating OsStrings, like for splitting, regex matching, or formatting OsStrings/Paths.

For instance the popular `fd` utility can't actually see files containing malformed utf-8, so you can hide files from system administrators naively using those tools by just adding invalid utf-8.

    touch $'example\xff.txt'
    fd 'example.*txt'         # not found
    fd -F $'example\xff.txt'  # fails: non-utf8
The existing rust libraries for manipulating OsString push people towards ignorance or rejection of non-utf8 filenames and paths.

I mean, you can always replace Python with LuaJIT or Perl... or Nim... or Crystal... or Odin... or with Rust....

Why non-thinking model? Also 5-20 minutes?! I guess I don’t know what kind of code you are writing but for my web app backends/frontends planning takes like 2-5 minutes tops with Sonnet and I have yet to feel the need to even try Opus.

I probably write overly detailed starting prompts but it means I get pretty aligned results. It does take longer but I try to think through the implementation first before the planning starts.

In my experience sonnet > opus, so it's no surprise you don't "need" opus. They charge a premium on sonnet now instead.

I sort of see what you are getting at but I am still a bit confused:

If I have a program that, based on the input given to it, runs some number of recursions of a function, and I have two compilers for the language, can I compile the program using both of them, no matter what the actual program is, if compiler A has PTC and compiler B does not? As in, is the only difference that you won’t get a runtime error if you exceed the max stack size?


That is correct, the difference is only visible at runtime. So is the difference between garbage collection (whether tracing or reference counting) and lack thereof: you can write a long-lived C program that calls malloc() throughout its lifetime but never free(), but you’re not going to have a good time executing it. Unless you compile it with Fil-C, in which case it will work (modulo the usual caveats regarding syntactic vs semantic garbage).

I think features of the language can make it much easier (read: possible) for the compiler to recognize when a function is tail call optimizable. Not every recursion will be, so it matters greatly what the actual program is.

It is a feature of the language (with proper tail calls) that a certain class of calls defined in the spec must be TCOd, if you want to put things that way. It’s not just that it’s easier for the compiler to recognize them, it’s that it has to.

(The usual caveats about TCO randomly not working are due to constraints imposed by preexisting ABIs or VMs; if you don’t need to care about those, then the whole thing is quite straightforward.)
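
A tiny illustration of that runtime-only difference (per the ES2015 spec, PTC applies to tail calls in strict mode; in practice only JavaScriptCore ships it, so treat this as a sketch):

    "use strict";

    // The recursive call is in tail position: with proper tail calls the
    // engine reuses the current frame, so this runs in constant stack space.
    function sumTo(n, acc = 0) {
      if (n === 0) return acc;
      return sumTo(n - 1, acc + n);
    }

    // Without PTC the same source still compiles and runs; a large enough
    // input just blows the stack at runtime, which is the only observable
    // difference.
    try {
      console.log(sumTo(1e6));
    } catch (e) {
      console.log("no PTC here:", e.message); // e.g. maximum call stack size exceeded
    }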


One thing I ran into recently when I played around with passkeys is the problem of orphaned keys. Basically, if I log into a website using the passkey, then go to my account settings and remove that passkey, then log out, I have a problem. Now I can’t sign in, but when I go to recover my account iOS/macOS will refuse to create a new passkey because one already exists for this website. So I have to go to my passwords list and manually remove it. I believe I was correctly using the JS API for signaling orphaned keys but the OS still wouldn’t remove it, so it was a situation of having to educate the user to remove the orphaned key manually (and hoping the user doesn’t get confused and remove the wrong key). You also apparently can’t create more than one passkey for the same username and the same website. So if I initially create an account from my MacBook and the passkey gets listed as “MacBook”, I then go to log in from my iPhone and it still uses the “MacBook” passkey because of iCloud sync. But this is confusing because I cannot have an iPhone key.

Overall it’s not terrible but I think these edge cases are going to keep biting people and need to be addressed in some way. And yes I understand that I could use a Yubikey or Bitwarden or some such but the point was that I wanted to see how this flow works for “normal” users who just use the iCloud Keychain and the experience leaves something to be desired.
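
For what it's worth, the signaling I was attempting looked roughly like this (method and option names per the WebAuthn Signal API as I understand it; the identifiers are placeholders, so treat it as a sketch):

    // After a sign-in fails because the server no longer knows the credential,
    // the site can hint to the authenticator that the passkey is orphaned.
    async function reportOrphanedPasskey(credentialIdBase64Url) {
      if (!PublicKeyCredential.signalUnknownCredential) return; // not supported
      await PublicKeyCredential.signalUnknownCredential({
        rpId: "example.com",                 // placeholder relying party ID
        credentialId: credentialIdBase64Url, // base64url ID from the failed assertion
      });
    }

In my case the call appeared to succeed, but iCloud Keychain still kept the orphaned key around.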


  > So if I initially create an account from my MacBook and the passkey gets listed as “MacBook”, I then go to log in from my iPhone and it still uses the “MacBook” passkey because of iCloud sync. But this is confusing because I cannot have an iPhone key.
Now try using a Windows or Linux computer...

This is why I strongly prefer to not use OSX passkeys. How the fuck am I supposed to login on my nix machines if you only allow me to enroll one passkey?!


Which Linux? And are you saying the Windows and Linux options are better or worse?

I mean more being someone who works in multiple ecosystems.

But FWIW, I have the least friction with Linux. But that's more that Windows and Apple have their walled gardens and that's where the friction comes from, though in different ways.


Why would a website leave you with an account but no way to log in aside from the account recovery procedure?

You register from your MacBook, then add your Android phone, then remove your MacBook key, then lose your Android phone.

The messed up thing is that the simplest backup option is a magic login link, which is obviously less secure. Also you cannot sync a passkey between platforms unless you use a third-party authenticator, so you have to have a backup method of some sort even if not for recovery reasons.


Aside from video, audio compatibility is tricky as well. You can do AAC stereo and most things support that but AAC 5.1 seems to be supported by only some devices so all my video files end up getting stereo AAC, 5.1 AAC, and 5.1 DTS to avoid live transcoding.

Exactly. What’s worse is that if you have something like a web service that calls an external API, when that API goes down your log is going to be littered with errors and possibly even tracebacks which is just noise. If you set up a simple “email me on error” kind of service you will get as many emails as there were user requests.

In theory, some sort of internal API status tracker would be better: something with a heuristic for whether the API is up or down and what the error rate is. It should warn you when the API goes down and when it comes back up. Logging could still show an error or a warning for each request, but you don’t need to get an email about each one.
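
A rough sketch of the kind of heuristic I mean (the thresholds, names, and notify hook are all arbitrary):

    // Hypothetical up/down tracker: alert once per state change, not per request.
    class ApiStatus {
      constructor(notify, { failureThreshold = 5, successThreshold = 3 } = {}) {
        this.notify = notify;
        this.failureThreshold = failureThreshold;
        this.successThreshold = successThreshold;
        this.consecutiveFailures = 0;
        this.consecutiveSuccesses = 0;
        this.up = true;
      }

      recordSuccess() {
        this.consecutiveFailures = 0;
        if (!this.up && ++this.consecutiveSuccesses >= this.successThreshold) {
          this.up = true;
          this.notify("external API is back up");
        }
      }

      recordFailure() {
        this.consecutiveSuccesses = 0;
        if (this.up && ++this.consecutiveFailures >= this.failureThreshold) {
          this.up = false;
          this.notify("external API appears to be down");
        }
      }
    }

    // Per-request logging stays as-is; the email only fires on transitions.
    const status = new ApiStatus((msg) => console.log("EMAIL:", msg));
    status.recordFailure(); // logged per request, but no email until the threshold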

