
only for the remaining part of the decade, if the US is lucky

trump pissing away a century of hard-won soft power handed the century to China


I very much doubt it

> Obviously we can put more weight behind the words a person says if they’ve proven themselves trustworthy in prior areas - and we should!

no, you shouldn't

this is how you end up with crap like vaccine denialism going mainstream

"but he's a doctor!"


Credentialism isn't a fix for the problem you've outlined. If anything, over-reliance on credentials bolsters and lends credence to crazy claims. The media hyper-fixates on it and amplifies it.

We've got Avi Loeb on mainstream podcasts and TV spouting baseless alien nonsense. He's preeminent in his field, after all.

Focus on what you understand. If you don't understand, learn more.


> There are two striking aspects of this rejection of EU bureaucracy. First, in comparison with other, comparable entities, such as the US federal bureaucracy, the EU’s administrative apparatus has a marginal size. Specifically, the EU, which is responsible for more than 440 million citizens, employs only around 60,000 people, while the US federal bureaucracy has more than two million employees that govern a territory with about 330 million inhabitants.

that's because the EU co-opted existing member state agencies instead of creating its own

e.g. the german federal department of agriculture is effectively controlled by the EU (almost all of its duties are an EU competence), but 100% of its costs are attributed to germany

this makes the EU look much more efficient than it is


It makes them look as efficient as they actually are. Being able to use existing infrastructure is good.

it does seem as if the world has gone insane

we have brilliant machines that can more or less work perfectly

then the scam artists have convinced people that spending a trillion dollars and terawatts to get essentially a biased random number generator to produce unusable garbage is somehow an improvement


These models have turned a bunch of NLP problems that were previously impossible into something trivial. I have personally built extremely reliable systems from the biased random number generator. Our F-score went from 20% with "classic" NLP to 99% with LLMs.
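For the curious, the shape of it is roughly this (a minimal sketch assuming the openai Python client; the model name, labels, and prompt are placeholders, not our production pipeline):

    # Hypothetical sketch: an LLM as a drop-in text classifier.
    # Assumes the official `openai` client; model and labels are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    LABELS = ["billing", "shipping", "returns", "other"]

    def classify(text: str) -> str:
        """Ask the model to pick exactly one label for a piece of text."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            temperature=0,        # keep classification output stable
            messages=[
                {"role": "system",
                 "content": f"Classify the text into one of: {', '.join(LABELS)}. "
                            "Reply with the label only."},
                {"role": "user", "content": text},
            ],
        )
        label = resp.choices[0].message.content.strip().lower()
        return label if label in LABELS else "other"  # guard against free-form replies

    print(classify("My package arrived broken, I want my money back"))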

NLP: natural language processing, for the unfamiliar. LLMs are tailor-made for this kind of work, and are great at applying structured rules to messy language. It's why they're also halfway decent at generating code in some situations.

I think where they fall down is in logical domains that rely on relative complexity and contextual awareness in a different way. I've had less luck, for example, having AI systems parse and break down a spreadsheet with complex rules. That's just my recent experience.


when has anything of value been posted on twitter?

redis uses the copy-on-write property of fork() to implement saving

which is elegant and completely legitimate
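the pattern in miniature (a toy sketch in Python rather than redis's actual C; assumes Linux fork semantics):

    # Toy sketch of fork()-based snapshotting, the trick redis's RDB save uses.
    # The child gets a copy-on-write view of memory as it was at fork time, so it
    # can serialise a consistent snapshot while the parent keeps serving writes.
    import json
    import os

    cache = {"hits": 41}  # stand-in for the in-memory dataset

    pid = os.fork()
    if pid == 0:
        # child: sees the dataset exactly as it was at the moment of fork
        with open("snapshot.json", "w") as f:
            json.dump(cache, f)
        os._exit(0)  # skip interpreter cleanup in the child

    # parent: keeps mutating; the kernel copies pages only as they're written
    cache["hits"] += 1
    os.waitpid(pid, 0)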


How does fork() work with vm.overcommit_memory=2?

A forked process would assume memory is already allocated, but I guess it would fail when writing to it, as if vm.overcommit_memory were set to 0 or 1.


I believe (per the stuff at the bottom of https://www.kernel.org/doc/Documentation/vm/overcommit-accou... ) that the kernel does the accounting of how much memory the new child process needs and will fail the fork() if there isn't enough. All the COW pages should be in the "shared anonymous" category so get counted once per user (i.e. once for the parent process, once for the child), ensuring that the COW copy can't fail if the fork succeeded.
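In practice that means fork() itself can return ENOMEM, which a caller can handle up front (a hedged sketch, Linux-specific):

    # Sketch: a fork() refused under strict overcommit (vm.overcommit_memory=2).
    # If charging the child's CoW copy would exceed CommitLimit, fork fails
    # immediately with ENOMEM instead of the child dying on a later write.
    import errno
    import os

    try:
        pid = os.fork()
    except OSError as e:
        if e.errno == errno.ENOMEM:
            print("fork refused: no commit headroom for a CoW copy")
        raise
    else:
        if pid == 0:
            os._exit(0)   # child would do its work here
        os.waitpid(pid, 0)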

As pm215 states, it doubles your memory commit. It's somewhat common for large programs/runtimes that may fork at runtime to spawn an intermediary process during startup to use for runtime forks, avoiding the cost of CoW on memory, mappings, etc. where the CoW isn't needed or desirable; but redis has to fork the actual service process because it uses CoW to effectively snapshot memory.

It seems like wrong accounting to count CoWed pages twice.

Not if your goal is to make it such that OOM can only occur during allocation failure, and not during an arbitrary later write, as the OP purports to want.

It's not really wrong. For something like redis, you could potentially fork and the child gets stuck for a long time while the whole cache in the parent is rewritten. In that case, even though the cache is fixed size / no new allocations, all of the pages are touched, so the total used memory is double what it was before the fork. If you want to guarantee allocation failures rather than demand-paging failures, and you don't have enough ram/swap to back twice the allocations, you must fail the fork.

On the other hand, if you have a pretty good idea that the child will finish persisting and exit before the cache is fully rewritten, double is too much. There's not really a mechanism for that though. Even if you could set an optimistic multiplier for multiple mapped CoW pages, you're back to demand paging failures --- although maybe it's still worthwhile.


> It's not really wrong. For something like redis, you could potentially fork and the child gets stuck for a long time and in the meantime the whole cache in the parent is rewritten.

It's wrong 99.99999% of the time, because the alternative is either "make it take double and waste half the RAM" or "write the in-memory data in a way that allows for snapshotting, throwing a bunch of performance into the trash".


Can you elaborate on how this comment is connected to the article?

did you read the article? there's a large section on redis

the author says it's bad design, but has entirely missed WHY it wants overcommit


You haven't made a connection, though. What does fork have to do with overcommit? You didn't connect the dots.

If you turn overcommit off then when you fork you double the memory usage. The pages are CoW but for accounting purposes it counts as double because writes could require allocating memory and that's not allowed to fail since it's not a malloc. So the kernel has to count it as reserved.
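You can watch that accounting happen from userspace (a rough Linux-only sketch; Committed_AS in /proc/meminfo is the kernel's running total of committed address space, and exact numbers will vary):

    # Rough sketch: Committed_AS jumps by the size of a private anonymous
    # mapping when the process forks, because the CoW copy is charged too.
    import mmap
    import os
    import time

    def committed_kb() -> int:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("Committed_AS:"):
                    return int(line.split()[1])
        raise RuntimeError("Committed_AS not found")

    SIZE = 512 * 1024 * 1024  # 512 MiB, private and anonymous so it's CoW on fork
    buf = mmap.mmap(-1, SIZE, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)

    before = committed_kb()
    pid = os.fork()
    if pid == 0:
        time.sleep(1)  # child holds its CoW reference briefly
        os._exit(0)
    after = committed_kb()
    os.waitpid(pid, 0)
    print(f"Committed_AS grew by ~{(after - before) // 1024} MiB across fork()")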

why is it always accounts with 50 karma saying this?

I have 22k karma and I think it's a trivial claim that LLMs work and that software is clearly on the cusp of being 100% solved within a couple years.

The naysaying seems to mostly come from people coping with the writing they see on the wall, armed with an anecdote about some goalpost-moving challenge designed for the LLM to fail (which they never seem to share with us). And if their low-effort attempt can't crack LLMs, then nobody can.

It reminds me of HN ten years ago where you'd still run into people claiming that Javascript is so bad that anybody who thinks they can create good software with it is wrong (trust them, they've supposedly tried). Acting like they're so preoccupied with good engineering when it's clearly something more emotional.

Meanwhile, I've barely had to touch code ever since Opus 4.5 dropped. I've started wondering if it's me or the machine that's the background agent. My job is clearly shifting into code review and project management while tabbing between many terminals.

As LLMs keep improving, there's a point where it's literally more work to find the three files you need to change than to just instruct the agent to do it. What changes the game is when you realize it's producing output you don't even need to edit anymore.


> It reminds me of HN ten years ago where you'd still run into people claiming that Javascript is so bad that anybody who thinks they can create good software with it is wrong (trust them, they've supposedly tried). Acting like they're so preoccupied with good engineering when it's clearly something more emotional.

Curiously enough, those people are still around and writing good software without javascript. And I say that as someone who generally enjoys modern JS.

> Meanwhile, I've barely had to touch code ever since Opus 4.5 dropped. I've started wondering if it's me or the machine that's the background agent. My job is clearly shifting into code review and project management while tabbing between many terminals.

Why not cut out the middleman and have Opus 4.5 do the code review and project management too?


wasn't sure if this was sarcasm until this point:

> with their anecdote about some goalpost-moving challenge designed for the LLM to fail (which they never seem to share with us).

literally what the boosters do on every single post!

"no no, the top model last week was complete dogshit, but this new one is world changing! no you can't see my code!"

10/10 for the best booster impression I've seen this year!


If we're going to argue on that level: Maybe it's because accounts with 12k karma spend more time posting than working on side projects and trying new tools.

that's the great thing about non-vibe coding

faster, fewer bugs, better output

leaving more time for shitposting


the guns don't even seem to work for their supposed primary purpose

uninvited and unwanted federal troops are roaming around cities against the wishes of state governors

so seems the US has the worst of both worlds: unqualified morons owning assault weapons, plus the tyranny


I wonder how much of that $80 million is garbage code like safe_sleep.sh
