
But before, I would have needed to be a programmer or have a team of data analysts analyze the data for me; now I can just process that data on my own and gather my own insights. That was my aha moment.

One thing to consider is that this version is a new architecture, so it'll take time for llama.cpp to get updated, similar to how it was with Qwen Next.

Apparently it's the same as the DeepSeek V3 architecture and is already supported by llama.cpp once the new name is added. Here's the PR: https://github.com/ggml-org/llama.cpp/pull/18936

The PR has been merged.

Spokenly is my go-to app on iOS for transcription as well.

I worry this is going to cause even more sensitive/privileged data exfiltration than is currently happening. And most “normies” won't even notice.

I know the counterargument is that people are already putting company data into ChatGPT. However, that is a conscious decision. This may happen without people even recognizing that they are “spilling the beans”.


This hit the front page yesterday so you may have seen it, but I figured I'd post it for posterity's sake:

> Claude Cowork exfiltrates files https://news.ycombinator.com/item?id=46622328


I think you're right, but the issue goes deeper. If the productivity gains are real, the incentive to bypass security becomes overwhelming. We are going to see a massive conflict where compliance tries to clamp down, but eventually loses to 'getting work done.'

Even if critics are right that these models are inherently insecure, the market will likely settle for 'optically patched.' If the efficiency gains are there, companies will just accept the residual risk.


Claude (generally, even outside Cowork mode) is vulnerable to exfiltration via its APIs, and Anthropic's response was that you should click the stop button if exfiltration occurs.

This is a good example of the Normalization of Deviance in AI by the way.

See my Claude Pirate research from last October for details:

https://embracethered.com/blog/posts/2025/claude-abusing-net...


I just set this up today. I had the Whispering app on my Windows computer, but it really wasn't working well on the Ubuntu machine I just set up. I found Handy randomly, and it was the last app I needed to go Linux full-time. Thank you!

Grim reminder that we put our faith in algorithms run by people who think "move fast and break things" includes human brains.

OpenAI claims the bot was just a passive "mirror" reflecting the user's psychosis, but they also stripped the safety guardrails that prevented it from agreeing with false premises, all to maximize user retention. Turns out you're arming the mentally ill with a personalized cult leader.


Great! It'll be SOTA for a couple of weeks until the quality degrades due to throttling.

I'll stick with the plug-and-play API instead.


Due to the "Code Red" threat from Gemini 3, I suspect they'll hold off throttling for longer than usual (by incinerating even more investor capital than usual).

Jump in and soak up that extra-discounted compute while the getting is good, kids! Personally, I recently retired and just occasionally mess around with LLMs for casual hobby projects, so I've only ever used the free tier of all the providers. Having lived through the dot-com bubble, I regret not soaking up more of the free and heavily subsidized stuff back then; I'm trying not to miss out this time. All this compute available for free or below cost won't last too much longer...


I've been using tools like ProxLLM, which just slam these AI models via proxy every time a free-tier limit is hit, and it works great.
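
The underlying pattern is just "try the next provider whenever you hit a 429." Here's a rough Python sketch of that generic fall-through idea; the provider names, endpoints, and response shape are placeholders I made up, not ProxLLM's actual implementation:

    import requests

    # Hypothetical provider list: names, endpoints, and keys are placeholders.
    PROVIDERS = [
        ("provider-a", "https://api.provider-a.example/v1/chat", "KEY_A"),
        ("provider-b", "https://api.provider-b.example/v1/chat", "KEY_B"),
    ]

    def complete(prompt):
        # Try each provider in order; move on when a free-tier limit (HTTP 429) is hit.
        for name, url, key in PROVIDERS:
            resp = requests.post(
                url,
                headers={"Authorization": "Bearer " + key},
                json={"prompt": prompt},
                timeout=30,
            )
            if resp.status_code == 429:
                continue  # rate-limited, fall through to the next provider
            resp.raise_for_status()
            return resp.json()["text"]  # assumed response shape
        raise RuntimeError("all providers are currently rate-limited")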


Can you provide a link to this tool? A search for ProxLLM didn't seem to find anything related.


Anybody else surprised that GoDaddy has so many and Porkbun has so few?

Goes to show that the Reddit/HN hivemind isn't representative of what is happening in reality.


It's actually about $600 just for the surgery: roughly 10 RVUs x $60 per RVU. Adding some RVU modifiers gets it to about $1,000.

Your point still stands, but it's still a bit more than $100.

Source: I'm an MD
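
For anyone who wants to plug in their own numbers, the back-of-the-envelope math is just RVUs times a conversion factor, plus whatever modifiers apply. A quick Python sketch; the $60 conversion factor and the modifier uplift below are round illustrative numbers, not actual CMS values:

    # Rough RVU-based reimbursement estimate; all figures are illustrative.
    work_rvus = 10              # approximate RVUs for the procedure
    conversion_factor = 60.0    # assumed dollars per RVU (round number, not the official rate)

    base = work_rvus * conversion_factor   # ~$600 for the surgery itself
    with_modifiers = base * 1.67           # assumed modifier uplift bringing it to ~$1,000

    print("base: $%.0f, with modifiers: ~$%.0f" % (base, with_modifiers))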


I agree. As a physician, I see this as bad and dangerous advice. By getting unneeded regular CT scans, you're dramatically increasing your risk of developing cancer. Beyond the radiation exposure itself, there is also the very real possibility of incidental findings that can lead to further testing, invasive biopsies, and unnecessary interventions, all of which compound your overall risk. You might solve one problem, but you've just guaranteed a much bigger, more explosive one down the line.


You won't die of heart disease if you die of cancer first. So I guess it sort of checks out, but it's not what I would choose.

