The real problem was pre-Windows XP. Anyway, just because you failed your assignment doesn’t mean it wasn’t a real problem. You should probably trust actual IT administrators over your experience as a college student.
This stat beggars belief. I think the headline is phrased incorrectly, and the overall stat is misleading. The actual stat covers only deceased drivers who were tested for THC.
Researchers analyzed coroner records from Montgomery County in Ohio from January 2019 to September 2024, focusing on 246 deceased drivers who were tested for THC following a fatal crash. When autopsies are performed, drug screening is typically part of the process.
The unanswered and unaddressed questions here are, how often and why were the THC tests administered? The article says that’s standard for autopsies. But how often are autopsies conducted on deceased drivers? I would be truly surprised if it’s 100%. In fact, I would expect it to happen only in cases where there was some suspicion of intoxication. In which case, this finding isn’t very surprising after all.
I wish the CEO of Mozilla could have stated the commitment a little more strongly than “it feels off-mission”. Privacy, user control, and security of the web browsing experience are (or should be) the CORE of Mozilla's mission. This isn’t a decision to take lightly on vibes. Allowing ad-blockers (or any content manipulation plugins users want) should be a deep commitment.
The EU is only just gearing up for the task of proper self-defence against Russia, and China is not at all interested in a world order except where it feels necessary to protect domestic order. So they do things like border-pushing against India, the nine-dash line, wars of words with Japan, and surveillance of overseas Chinese nationals, but other than that they are a long way from anything like the European colonial or world war era.
Nah, AI is a massive money pit, and Starship has proven to be a dead end. The real tech revolutions of the moment are in energy production and storage, and in health care advances like mRNA and GLP-1. The US is actively self-sabotaging on both energy and health care at the moment.
The top 8 are American, as are 17 of the top 20. Seven of the top 8 are into AI. I'm not convinced it's going nowhere. Even if AI on its own isn't profitable, companies like Nvidia, Apple, and Google are doing fine.
They are decimating academic research into health care tech in general. There’s been mixed messaging from the administration on GLP-1s (although if they keep pushing on the eugenics theme, they will solidify as anti on those as well soon enough) but that wasn’t really the point. I named mRNA and GLP-1s as two examples of modern tech revolutions that are not AI or space-related. Those are the modern tech breakthroughs, not AI and definitely not space launches. (I went back and edited the post to make it clearer what I meant by “both”).
Wegovy and Zepbound have not been covered by Medicare for weight loss, “and they’ve only rarely been covered by Medicaid,” Trump said in the Oval Office. “They’ve often cost consumers more than $1,000 per month, some a lot more than that. ... That ends starting today.”
"“This is the biggest drug in our country, and that’s why this is the most important of all the [most favored nation] announcements we’ve made,” Health and Human Services Secretary Robert F. Kennedy Jr. said during the briefing. “This is going to have the biggest impact on the American people. All Americans, even those who are not on Medicaid, Medicare, are going to be able to get the same price for their drugs, for their GLP-1s.""
Honestly I’ve seen plenty of folks get promoted to “team lead” because they aren’t as productive with the actual coding. Someone needs to focus on the non-technical project tasks, so the boss picks the least productive team member to move to that role. Calling it a “team lead” makes it more appealing than calling it “worst coder”.
A certain type of person loves nothing more than to spill their guts to anyone who will listen. They don’t see their conversational partners as other equally aware entities—they are just a sounding board for whatever is in this person's head. So LLMs are incredibly appealing to these folks. LLMs never get tired or zone out or make snarky responses. Add in chatbots’ obsequious enabling, and these folks are instantly hooked.
This is a marketing press release about two industries that are not particularly trustworthy in their claims. I would not put any stock in any assertion made within.
I think that's just true in general. Business users at $work are already saying that they would rather just talk to ChatGPT (with voice, for some reason I, a keyboard person, don't understand) than deal with GUIs. They want to describe what they need and have the computer do it, not click around.
Once you've abstracted away the UI (and the training on how to use it) it will be a lot easier to just swap one SaaS for another.
Yes, except for the fact that any non-trivial SaaS does non-trivial stuff that an agent will be able to call (acting as the 'secretary'), while the user still has to pay the subscription to use it.
Yes, but now it's easier for other SaaS to compete on that, because incumbents no longer get to bundle individual features under a common webshit UI and restrict users to whatever flows the vendor supports. There will be pressure to provide more focused features, because the combining and the UI chrome will be done by, or on the other side of, the AI agent.
Also, having to retrain users to use a new shitty UI after they got used to the previous shitty UI is a major moat of many SaaS services. The user doesn't care about the web portal, they just want to get work done. Switching to a different web portal needs to be a big net positive because users will correctly complain that now they are unproductive for a while because the quirks and bugs of the previous SaaS don't match those of the new SaaS.
In a world where the interface is "you talk to the computer" you will be able to swap providers way more easily.
That's the brilliance of AI - it doesn't matter if the product actually works or not. As long as it looks like it works and flatters the user enough, you get paid.
And if you build an AI interface to your product, you can make it misbehave in subtly the right ways to direct more money towards you. You can take advertising money to make the AI recommend certain products. You can make it give completely wrong answers about your competitors.