Would love to see them do another OSX 10.6 and just release a version with lots bug fixes and no new features. But instead it'll be a new half-baked LLM tool to help you make new half-baked LLM tools.
Never found it hard to build an oscillator, the hard part is musical voltage per octave. 3340 repro chips are the way to go, the best non-3340 circuit I've seen is this one and it's still temperature-sensitive: https://www.youtube.com/watch?v=FiCMjt0mqvI
Temperature sensitivity only matters in polysynths where you don't have easy access to per-oscillator tuning. It is not difficult to build an oscillator with better pitch stability than a guitar, even my VCOs with no temperature compensation require less tuning than any guitar I have owned.
My VCOs which lack temperature compensation are oscillators built from basic components. The closest I have to oscillator on an IC these days, are the VCOs in my Moog Prodigy which use a quad OPAmp and a 3086 transistor array, it is far more stable than any guitar string.
Memory loss from sleep deprivation is an evolutionary advantage. If you remembered how rough the first few months of new children are, you wouldn't do it again.
I've seen a dozen frontpage posts this week that were nothing more than "look at this garbage Claude made for me." Can we get some new moderation rules to prevent slop on HN?
What here has been reinvented? There's nothing out there like datastar. Htmx + alpine is similar, but much heavier and less functional.
And God forbid someone try to make open source sustainable by charging for largely-unnecessary functionality and actively dissuading you from buying it - as the devs do regularly in their discord.
And phoenix doesn't work with ANY backend language or framework.
Simple features? Making those imperative APIs declarative is not very simple for me, but you're welcome to not use those features and write them yourself.
A couple of things on the Phoenix point:
- Requires the adoption of Elixir and Datastar is backend agnostic
- Adopting Phoenix feels more suited to greenfield projects, but Datastar is suited for that and brownfield ones.
- Websockets vs Server Sent Events has been really interesting and nuanced
In what specific areas Phoenix Live View is so far ahead? Do you mind elaborating?
The unfortunate disadvantage of Live View is that you need to write Elixir. A lovely language, but it would be hard to sell in company that use only <SOME_LANGUAGE>. The hypermedia libraries like d* and htmx can be used with any backend.
Came here to say this. I use fastmail am quite happy with it because I just want a reliable inbox and nothing else. Just keep it running and don't touch anything else.
No idea, if I see a medium link I just ignore it. Substack is heading the same way for me too, it seems to be self-promotion, shallow-takes, and spam more than anything real.
The page loads a "subscribe to author" modal pretty quickly after the page loads. You may have partially blocked it, so you won't see the modal but it still prevents scroll.
Firefox has a lot of weird little pop up ads these days. It seems like this is a very recent phenominon. Is this actually Firefox doing this or some kind of plug-in accidentally installed?
Same. Hit escape shortly after the page loads to stop loading whatever modal is likely blocking scroll. I don't see the modal so it's likely blocked by ublock, but still stops scroll.
Seems like it works only for a very specific type of childhood autism, but if my child had this I would be kicking down doors to get it. The article has some good insight into how honest researchers feel about their work being trumpeted by the scientifically illiterate carnival barkers in charge of things.
Even if every major company in the US spends $100,000 a year on subscriptions and every household spends $20/month, it still doesn't seem like enough return on investment when you factor in inference costs and all the other overhead.
New medical discoveries, maybe? I saw OpenAI's announcement about gpt-bio and iPSCs which was pretty amazing, but there's a very long gap between that and commercialization.
Wasn't the plan AGI, not ROI on offering services based on current gen AI models. AGI was the winner takes all holy grail, so all this money was just buying lottery tickets in hopes of striking AGI first. At least that how I remember it, but AGI dreams may have been hampered by lack of exponential improvement in last year.
As sibling commentor mentions, Zuckerberg is dropping billions on AGI currently (or "super human intelligence", whatever the difference is). And, I don't have time to find it, but maybe Sam Altman might've said AGI is the ultimate goal at somepoint - idk, I don't pay too much attention to this stuff tbh, you'll have to look it up if you're interested.
Oh and John Carmack, of Doom fame, went off to do AGI research and raised a modest 20(?) million last I heard.
The "game plan" is, and always was, to target human labor. Some human labor is straight up replaceable by AI already, other jobs get major productivity boosts. The economic value of that is immense.
We're not even at AGI, and AI-driven automation is already rampaging through the pool of "the cheapest and the most replaceable" human labor. Things that were previously outsourced to Indian call centers are now increasingly outsourced to the datacenters instead.
Most major AI companies also believe that they can indeed hit AGI if they sustain the compute and the R&D spending.
If LLMs could double the efficiency of white collar workers, major companies would be asked for far more than $100,000 a year. If could cut their expensive workforce in half and then paid even 25% of their savings it could easily generate enough revenue to make that valuation look cheap.
Unfortunately for the LLM vendors, that's not what we're seeing. I guess that used to be the plan, and now they're just scrambling around for whatever they can manage before it all falls apart.
Think of it as maybe $10k/employee, figuring a conservative 10% boost in productivity against a lowball $100k/year fully burdened salary+benefits. For a company with 10,000 employees that’s $100m/year.
Even at $10k/yr/employee, you'd need 30 million people on the 10k/yr plan to hit 300B ARR. I think that's a hell of a big swing. 3 million, recoup over ten years? Maybe, but I still don't think so. And then competition between 4 or 5 vendors, larger customers figuring out it's cheaper to train their own models for one thing that gives them 90% of the productivity gains, etc.
But rather than speculating, I'm generally curious what the companies are saying to their investors about the matter.
Eh, seems likely to me existing companies are structured for human labor in a way that's hard to really hard to untangle — smart individuals can level up with this stuff, but remaking an entire company demands human-level AI (not there yet) or a mostly AI-fluent team (working with/through AI is a new skill and few workers have developed it).
New co's built by individuals who get AI are best positioned to unlock the dramatic effects of the technology, and it's going to take time for them to eclipse encumbent players and then seed the labor market with AI-fluent talent
Until there's a paradigm shift and we get data and instructions in different bands, I don't see how it can get better over time.
It's like we've decided to build the foundation of the next ten years of technology in unescaped PHP. There are ways to make it work, but it's not the easiest path, and since the whole purpose of the AI initiative seems to be to promote developer laziness, I think there are bigger fuck-ups yet to come.