Hacker Newsnew | past | comments | ask | show | jobs | submit | Mizza's commentslogin

Would love to see them do another OSX 10.6 and just release a version with lots bug fixes and no new features. But instead it'll be a new half-baked LLM tool to help you make new half-baked LLM tools.


Never found it hard to build an oscillator, the hard part is musical voltage per octave. 3340 repro chips are the way to go, the best non-3340 circuit I've seen is this one and it's still temperature-sensitive: https://www.youtube.com/watch?v=FiCMjt0mqvI


Temperature sensitivity only matters in polysynths where you don't have easy access to per-oscillator tuning. It is not difficult to build an oscillator with better pitch stability than a guitar, even my VCOs with no temperature compensation require less tuning than any guitar I have owned.


But that's for synths where the oscillators are ICs. I'm talking about simple oscillators from basic components


My VCOs which lack temperature compensation are oscillators built from basic components. The closest I have to oscillator on an IC these days, are the VCOs in my Moog Prodigy which use a quad OPAmp and a 3086 transistor array, it is far more stable than any guitar string.


3340s are great. I've also heard good things about the SSI2130 and 2131 chips as a more modern alternative to 3340.

3340s are more DIY-friendly though, as they're DIP packages.


Memory loss from sleep deprivation is an evolutionary advantage. If you remembered how rough the first few months of new children are, you wouldn't do it again.


This looks like dogshit.

I've seen a dozen frontpage posts this week that were nothing more than "look at this garbage Claude made for me." Can we get some new moderation rules to prevent slop on HN?


Reinventing the wheel and then charging for simple features?

If this paradigm excites you, just use Phoenix, dawg. It's so far ahead, everything else feels primitive.


What here has been reinvented? There's nothing out there like datastar. Htmx + alpine is similar, but much heavier and less functional.

And God forbid someone try to make open source sustainable by charging for largely-unnecessary functionality and actively dissuading you from buying it - as the devs do regularly in their discord.

And phoenix doesn't work with ANY backend language or framework.


Simple features? Making those imperative APIs declarative is not very simple for me, but you're welcome to not use those features and write them yourself.

A couple of things on the Phoenix point:

- Requires the adoption of Elixir and Datastar is backend agnostic - Adopting Phoenix feels more suited to greenfield projects, but Datastar is suited for that and brownfield ones. - Websockets vs Server Sent Events has been really interesting and nuanced


In what specific areas Phoenix Live View is so far ahead? Do you mind elaborating?

The unfortunate disadvantage of Live View is that you need to write Elixir. A lovely language, but it would be hard to sell in company that use only <SOME_LANGUAGE>. The hypermedia libraries like d* and htmx can be used with any backend.


I can't use Phoenix with Rails now, can I?


Came here to say this. I use fastmail am quite happy with it because I just want a reliable inbox and nothing else. Just keep it running and don't touch anything else.


Has Medium stopped working on Firefox for anybody else? Once the page is finished loading, it stops responding to scroll events.


No idea, if I see a medium link I just ignore it. Substack is heading the same way for me too, it seems to be self-promotion, shallow-takes, and spam more than anything real.


The page loads a "subscribe to author" modal pretty quickly after the page loads. You may have partially blocked it, so you won't see the modal but it still prevents scroll.


Same here. Meanwhile I close a link/page as soon as I realize it's on medium.


Maybe you have an ad-blocker that just hides the popup but does not restore scrolling (scrolling is usually prevented when popups are visible)


Firefox has a lot of weird little pop up ads these days. It seems like this is a very recent phenominon. Is this actually Firefox doing this or some kind of plug-in accidentally installed?


Hm, I haven't seen that. Perhaps it's worth reviewing your plugins


Thanks! I think it might have been notifications from futurism.com. I don't remember visiting that site or allowing notifications (on purpose anyway).


I avoid medium where possible.

If I could pipe text content to my terminal with confidence, I would.


Same. Hit escape shortly after the page loads to stop loading whatever modal is likely blocking scroll. I don't see the modal so it's likely blocked by ublock, but still stops scroll.


Seems ok for me on Firefox 143.0.1


Have you tried using reader mode?


WashPo article on this was interesting: https://www.washingtonpost.com/health/2025/09/22/leucovorin-...

Seems like it works only for a very specific type of childhood autism, but if my child had this I would be kicking down doors to get it. The article has some good insight into how honest researchers feel about their work being trumpeted by the scientifically illiterate carnival barkers in charge of things.



What's the path to recouping that money?

Even if every major company in the US spends $100,000 a year on subscriptions and every household spends $20/month, it still doesn't seem like enough return on investment when you factor in inference costs and all the other overhead.

New medical discoveries, maybe? I saw OpenAI's announcement about gpt-bio and iPSCs which was pretty amazing, but there's a very long gap between that and commercialization.

I'm just wondering what the plan is.


Wasn't the plan AGI, not ROI on offering services based on current gen AI models. AGI was the winner takes all holy grail, so all this money was just buying lottery tickets in hopes of striking AGI first. At least that how I remember it, but AGI dreams may have been hampered by lack of exponential improvement in last year.


I’m sure somebody believed that? But I never met them.


> I’m sure somebody believed that?

“Somebody” like… Sam Altman? Because he said that’s what he actually believes.

https://www.startupbell.net/post/sam-altman-told-investors-b...


As sibling commentor mentions, Zuckerberg is dropping billions on AGI currently (or "super human intelligence", whatever the difference is). And, I don't have time to find it, but maybe Sam Altman might've said AGI is the ultimate goal at somepoint - idk, I don't pay too much attention to this stuff tbh, you'll have to look it up if you're interested.

Oh and John Carmack, of Doom fame, went off to do AGI research and raised a modest 20(?) million last I heard.


I want to say Mark Zuckerberg but I think Meta's investment is also targeted at creating their own social media content


The "game plan" is, and always was, to target human labor. Some human labor is straight up replaceable by AI already, other jobs get major productivity boosts. The economic value of that is immense.

We're not even at AGI, and AI-driven automation is already rampaging through the pool of "the cheapest and the most replaceable" human labor. Things that were previously outsourced to Indian call centers are now increasingly outsourced to the datacenters instead.

Most major AI companies also believe that they can indeed hit AGI if they sustain the compute and the R&D spending.


If AI Doesn’t Fire You, It Can’t Pay For Itself https://esborogardius.substack.com/p/if-ai-doesnt-fire-you-i...


If LLMs could double the efficiency of white collar workers, major companies would be asked for far more than $100,000 a year. If could cut their expensive workforce in half and then paid even 25% of their savings it could easily generate enough revenue to make that valuation look cheap.


Unfortunately for the LLM vendors, that's not what we're seeing. I guess that used to be the plan, and now they're just scrambling around for whatever they can manage before it all falls apart.


Ok but then another AI company would just offer the same thing at a lower cost.


How much lower though?


It'll keep going down until it's hardly over cost price.


$100k/year is literally nothing.

Think of it as maybe $10k/employee, figuring a conservative 10% boost in productivity against a lowball $100k/year fully burdened salary+benefits. For a company with 10,000 employees that’s $100m/year.


That's literally not how the word "literally" works.



That’s literally how the English language works. It literally evolves


It doesn't proliferate new forms without selection, that's not how evolution works.

(Heh, I see "proliferate" itself is a back-formation.)


Even at $10k/yr/employee, you'd need 30 million people on the 10k/yr plan to hit 300B ARR. I think that's a hell of a big swing. 3 million, recoup over ten years? Maybe, but I still don't think so. And then competition between 4 or 5 vendors, larger customers figuring out it's cheaper to train their own models for one thing that gives them 90% of the productivity gains, etc.

But rather than speculating, I'm generally curious what the companies are saying to their investors about the matter.


I don’t get why you need 300B arr?


But we won’t get there unless the company integration failure rate falls below 95%


Eh, seems likely to me existing companies are structured for human labor in a way that's hard to really hard to untangle — smart individuals can level up with this stuff, but remaking an entire company demands human-level AI (not there yet) or a mostly AI-fluent team (working with/through AI is a new skill and few workers have developed it).

New co's built by individuals who get AI are best positioned to unlock the dramatic effects of the technology, and it's going to take time for them to eclipse encumbent players and then seed the labor market with AI-fluent talent


They just have to be positive revenue on inference and run it for a long time. Why do you think they can’t recoup it?


Major companies will spend 10-100x that if it resulted in real tangible productivity gains for their businesses.


I think it's "scam everyone into giving us lots of money, then run before the bills come".


Until there's a paradigm shift and we get data and instructions in different bands, I don't see how it can get better over time.

It's like we've decided to build the foundation of the next ten years of technology in unescaped PHP. There are ways to make it work, but it's not the easiest path, and since the whole purpose of the AI initiative seems to be to promote developer laziness, I think there are bigger fuck-ups yet to come.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: