Docker Desktop has a pretty nice sandbox feature that will also store your CC (and other) credentials, so you don't have to re-auth every time you create a new container.
Funnily enough, we shipped the Docker Desktop VM a decade ago now (experience report at https://dl.acm.org/doi/10.1145/3747525). The embedded VM in DD is much more stripped down than the one in Claude Cowork (it's based on https://github.com/linuxkit/linuxkit), and it's more specialised to container workloads rather than just using bubblewrap for sandboxing (system services run in their own isolated namespaces).
Given how many products seem to be using this shipping-Linux-as-a-library-VM trick these days, it's probably a good time for an open source project to step up to supply a more reusable way of assembling this layer into a proper Mac library...
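For anyone who hasn't used it, here's a minimal sketch of the bubblewrap-style sandboxing mentioned above, wrapped in Python. The flags are standard bwrap options, but the policy (read-only root, private /tmp, fully unshared namespaces) and the wrapped command are purely illustrative; it's not what Docker Desktop or Claude Cowork actually configure.

```python
# Minimal sketch of bubblewrap-style sandboxing (illustrative policy only;
# not the actual configuration used by Docker Desktop or Claude Cowork).
import subprocess

def run_sandboxed(cmd: list[str]) -> subprocess.CompletedProcess:
    bwrap = [
        "bwrap",
        "--ro-bind", "/", "/",       # read-only view of the host filesystem
        "--tmpfs", "/tmp",           # private scratch space
        "--dev", "/dev",             # minimal /dev
        "--proc", "/proc",           # fresh procfs for the new PID namespace
        "--unshare-all",             # new PID, mount, IPC, UTS, net, and user namespaces
        "--die-with-parent",         # clean up the sandbox if the parent exits
    ]
    return subprocess.run(bwrap + cmd, check=True)

if __name__ == "__main__":
    # The sandboxed shell sees a read-only root and has no network access.
    run_sandboxed(["/bin/sh", "-c", "id && touch /probe || echo root is read-only"])
```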
This is one of those announcements that actually just excites me as a consumer. We give our children HomePods as their first device when they turn 8 years old (Apple Watch at 10 years, laptop at 12) and in the 6 years I have been buying them, they have not improved one ounce. My kids would like to listen to podcasts, get information, etc. All stuff that a voice conversation with ChatGPT or Gemini can do today, but Siri isn't just useless-- it's actually quite frustrating!
> Being these things are at their core probability machines, ... How? Why?
Is Siri a probability machine? I don't think it's an LLM at all right now; I thought it was some horrendous tree of switch statements, hence the difficulty of improving it.
Apple search is comically bad, though. Type in some common feature or app, and it will yield the most obscure header file inside the build deps directory of some Xcode project you forgot existed.
Not exactly the same, but kinda: my gen 1 Google Home just got Gemini and it finally delivers on the promise of like 10 years ago! Brought new life to the thing beyond playing music, setting timers, and occasionally asking really basic questions
It remains to be seen what the existing HomePods will support. There’s been a HomePod hardware update in the pipeline for quite some time, and it appears they are waiting for the new Siri to be ready.
It's not going to help them. For Siri to be really useful it would need deep system integration, and an external model is not going to provide that. People didn't believe me when I said the same about Apple Intelligence and OpenAI.
I am currently employing a consultant for something. It's something I don't want to do myself and they are doing what I need, but it's so painfully obvious they are just vanilla ChatGPTing everything it's almost funny at this point.
Any IDE-based editor feels like a stopgap to me. We may not be there yet, but I feel that in the future a "vibe coder" isn't even going to look at much code at all. Much of what developers relying on Cursor, Windmill, Replit, etc. are doing is performative as far as the code is concerned. There is just a lot of copy/pasting of console errors and asking for things one way or another.
Casual or "vibe" coding is all about the output. Doesn't work? Roll back. Works well? Keep going. Feeling gutsy? Single shot.
Vibe coding is just a prototyping tool / "dev influencer" gimmick. No one serious is using Cursor for vibe coding, nor will anyone serious ever vibe code. It's for AI-assisted development-- in other words, a more powerful intellisense.
I vibed this puzzle game into existence with two breaks* from vibe coding midway through to get it out of a rut: https://love-15.com/
It builds for PC, web, iOS and Android.
It's a simple sliding-block puzzle game with a handful of additional game mechanics (which you can see if you go into settings and unlock all levels), saved progress and best times/move counts, a level editor, daily puzzles with shareable results, and theme selection.
I think I found the current limits of vibe coding. There's one bug that I know of which I don't think can be fixed with vibe coding, and so I haven't fixed it as this was largely an experiment to see how far you could get with vibe coding.
I've since inspected the code and I believe the code is just too bad for the LLM to get anywhere at this point. Looking at the git history - I had it commit every time a feature was complete and verified working by me - the code started OK but really went downhill as it got bigger, and it got worse faster over time.
(When I first broke from vibe coding it was hitting a brick wall on progress earlier than expected and I needed to guide it to break the project up into more files, which it is terrible at by the way; I think the one giant file was hitting context length limits, which were smaller at the time than they are now. The second break was at the end to get it over the finish line when it just could not fix some save bugs without introducing new ones, and I did just barely enough technical guidance to help it finish. In neither case did I write code, but I did read code in both cases.)
I felt the same way for a while, but I am really not so sure now. Cursor is definitely drawing on the influencer/growth well to drive some portion of these #s.
It's a lot easier and more scalable to get 1000 people "vibe coding" than it is to get 10 experienced engineers using you for autocomplete.
Cursor isn't for vibe coding. I use it. I ask the AI to do something I know how to do but it can do it faster. I check the changes to make sure everything looks good.
But this sums up so well why I think the valuation is so riskily high. You're saying that right now IDE UX is so slow and bad that often there are changes you know how to make but it would literally just be too many keystrokes for you to want to do yourself.
As far as I can tell if people like you just had a way to express code ideas with fewer keystrokes, a lot of Cursor's market would pretty much just dry up.
I am currently dealing with a relatively complex legal agreement. It's about 30 pages. I have a lawyer working on it who I consider the best in the country for this domain.
I was able to pre-process the agreement, clearly understand most of the major issues, and come up with a proposed set of redlines all relatively easily. I then waited for his redlines and then responded asking questions about a handful of things he had missed.
I value a lawyer being willing to take responsibility for their edits, and he also has a lot of domain specific transactional knowledge that no LLM will have, but I easily saved 10 hours of time so far on this document.
I am a semi-retired blue collar electrician. Higher IQ, but lowly certifications (definitely not a lawyer).
Currently I have initiated a lawsuit in my US state's small claims civil court over a relatively simple payment dispute. Without the ability to bounce legal questions/tactics/procedure off of Perplexity, I wouldn't have felt comfortable enough to represent myself in court.
Even if I were to need a lawyer on this simple case, the majority of the "leg work" has already been completed by free, non-pay LLMs.
My court date is early June; I'm both nervous and excited (for restitution)!
----
My brother is a judge, and I have been arguing for years that law clerking as a career entry point is probably in its last gasps; Chief Justice Roberts's end-of-2023 SCOTUS report was a refreshing read to share among family members (it argued that LLMs will make the judiciary more accessible to ordinary people).
Personally, I would already rather have a jury of LLMs deciding most legal outcomes (though they would need to be impartially programmed, if that's even possible). It would definitely make for better democratic accessibility.
I found Bruce Schneier's recent article "Reimagining Democracy" [1] quite an interesting thought experiment (it's about his hosting intellectuals to discuss creating entirely new democracies using modern technologies). It'd be super (and fairer) if a trusted AI government could lead to better democracies than "modern capitalism" can or has.
This is super interesting, because I've been in similar conflicts (as a renter trying to recover security deposits in court) and been screwed over by the lawyers I've retained (like, literally not even showing up in court) and I probably could have done all this with an LLM myself. When the stakes are low, why not?
I've also had a lawyer not show up (me as criminal defendant), and then try to fleece me for more money (than initially agreed) because we (I) had to reschedule my court date — only to eventually reach a simple plea agreement which any public defender could have secured. LLMs didn't exist when this occurred, well over a decade ago.
>similar conflicts (as a renter trying to recover security deposits in court)
This is basically my current scenario. LL sold the rental I was living in, which I had pre-paid for an entire year, because the septic tank went out. We mutually agreed to end our lease... he then wrote me a check for overpayment... he then canceled the check (without even telling me). As an added bonus, he tried nothing to fix the tank... then sold the disaster to somebody else (I found out only when the new owner showed up on my/his doorstep).
Not my first time in court, but is my first time as Plaintiff. I'm very excited to (potentially) get awarded TREBLE DAMAGES on my few-thousand-dollar initial claim/dispute.
The era of "A lawyer that represents himself has a fool for his client" are rapidly approaching end, particularly within small claims civil courts. I'd love to see entire branches of government replaced with machine-learnt judges.
I've already decided that if the Defendant (in my case) chooses to appeal to our higher court (i.e. not small claims, which he is entitled to do) I will retain an attorney, only because civil procedure is so nuanced.
But I'm trying first, and most of the legwork is already formulated.
I think it's the small TPM limits. I'll be way under the 10-30 requests per minute while using Cline, but it appears the input tokens count towards the rate limit, so I'll find myself limited to one message a minute if I let the conversation go on for too long, ironically because of Gemini's long context window. AFAIK Cline doesn't currently offer an option to cap the context below the model's full capacity.
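To put rough numbers on it (the quota and context figures below are hypothetical, not Google's actual Gemini limits), the arithmetic looks roughly like this:

```python
# Back-of-envelope math for why input tokens, not request count, become the
# bottleneck. TPM_LIMIT and CONTEXT_TOKENS are made-up figures, not Google's
# actual Gemini quotas.
def max_requests_per_minute(tpm_limit: int, tokens_per_request: int) -> float:
    """Requests that fit in one minute when each request's full input
    context counts against the tokens-per-minute budget."""
    return tpm_limit / tokens_per_request

TPM_LIMIT = 1_000_000        # hypothetical tokens-per-minute quota
CONTEXT_TOKENS = 800_000     # a long Cline session re-sends most of its history each turn

print(max_requests_per_minute(TPM_LIMIT, CONTEXT_TOKENS))   # -> 1.25 requests/minute
```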
I just had this experience. I had a 200ft run between my house and barn. The original builder ran direct-bury ethernet between the two, and it failed. I dug a trench, put in a conduit, pulled 2 fibre lines, and left a pull string in.
I recently had the primary fibre fail and am now on the backup. If I need to pull new ones in the future I can do that pretty easily through the conduit.
Yep. Direct-burial ethernet is surprisingly vulnerable to nearby lightning strikes. It's not a matter of IF the cable or devices get damaged, it's a matter of when. Nearby (not even direct) lightning induces ground voltage potentials between buildings to the tune of hundreds of volts or more.
Do you have experience or information on direct burial ethernet for something like a POE camera? I'd like to put one on the back fence to watch the back of the house and yard. Direct burial in the back yard would be a plenty easy thing to do, but the cable is pricey enough that I've held off for now.
I used to consult for an ISP that put direct bury in a region where it snows 1/3rd of the year.
It was profit generating. They would offer to put conduit in for an extra fee, the customers always said no, then they would be back to install conduit and cable in the spring after the ice had killed the cable.
Woah, this is surprising. Do you know the root cause? I could imagine copper cabling being much more sensitive to the outdoors, which is why I'm so surprised by your fibre failure.