The AI integration doesn't add much usefulness to the assistant for me, but I've liked even the previous Google Assistant way more than I've ever found Siri useful. Apple had been behind in the digital assistant game for years before the LLM integrations came along.
Not even that. The rare things I use an LLM for I'm not doing on my phone anyway, so the integration is of no use to me at all, and IIRC you lose some functions with Gemini that the old GA could handle.
Been on an iPhone since the original. Until about the XS I got a new model every other year; since the XS I've been on a 4-5 year cycle, with a 14 Pro replacing my XS.
I also have a Pixel 7 that I use primarily as a home phone, stuck on a cheap pre-paid plan with very few minutes. I've used it at home on wifi a number of times just to try it out. I think it's... fine? App quality is definitely a negative: as a Mac user I find that some of my favorite developers are better at making iOS apps, or their app simply isn't available on Android at all and the alternatives aren't nearly as good.
If not for that I probably wouldn't care from a usability perspective, but I still think Apple's focus on privacy and security tends to win out overall. That said, I now carry a camera with me 90% of the time, so I suspect I'll be downgrading to a standard iPhone at my next upgrade if the camera carry continues. When I need super pocketable, I use a Ricoh GR; when I need small but great, a Fujifilm X100VI; and when I want to go big, I have a Sony a7CR full-frame camera (still a small camera, but FF lenses are much larger than APS-C lenses).
I tried carrying a camera around, but honestly I forgot it was there most of the time. Taking the photo with the phone was easier, and the photos were already synced with my digital photo library. I would really like to carry a camera and switch to a flip phone... but unfortunately it's not for me.
If that's what's disappointing you about the iPhone, it's WAY worse on Android. Google were the ones who pushed me to switch to an iPhone after almost a decade on Android. Stuff not "innovating" is a feature in a mature, well-designed system. Google does the polar opposite and changes things constantly just for the sake of change (well, internally it's so some PO can get a promotion).

Most of their UI changes don't even come in app updates: they'll have both interfaces installed on your device but only toggle one on via a server-side flag tied to your account/device, so one day some app will completely change without you having updated anything, then a week later it might just go back to the old UI. They'll deprecate any app or feature you like and replace it with something else, insisting it has the same feature set when it doesn't. Everything feels like a constant beta you're paying to participate in.
The iPhone's new Photos app was a controversial change, but it was so infamous because it's such a rare exception to their standard of changing very little. Open up your Messages app or Settings and it looks and functions basically exactly as it did on the first iPhone in 2007.
What we use it for:
- vulnerability assessments for containers and VMs (it gives a list of vulnerable or outdated packages)
- initial-access vulnerabilities: what happens if an internet-facing component is compromised because of a vulnerable package, what kind of data it has access to (it has some regexes and whatnot to figure out whether your database holds PII, HIPAA-covered data, etc.; see the sketch after this list), what lateral movement is possible, and so on
- provides information on what you can do to fix a finding
- IAM checks for overly broad permissions
- service account age and overdue key rotations
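For a flavor of what the PII and key-rotation checks boil down to, here's a minimal Python sketch. This is not the vendor's actual implementation; the regex patterns and the 90-day rotation threshold are my assumptions.

    import re
    from datetime import datetime, timedelta, timezone

    # Hypothetical patterns; a real scanner ships many more, plus validation logic.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def classify_sample(rows):
        """Return the PII categories found in a sample of database rows."""
        found = set()
        for row in rows:
            for name, pattern in PII_PATTERNS.items():
                if any(pattern.search(str(value)) for value in row):
                    found.add(name)
        return found

    MAX_KEY_AGE = timedelta(days=90)  # assumed rotation policy, not the vendor's

    def overdue_keys(keys):
        """Flag service account keys older than the rotation threshold.

        `keys` is a list of dicts like {"id": ..., "created": <aware datetime>},
        e.g. built from your cloud provider's list-keys API.
        """
        now = datetime.now(timezone.utc)
        return [k for k in keys if now - k["created"] > MAX_KEY_AGE]

The real product obviously layers a lot more on top (asset inventory, reachability, severity scoring), but the individual checks are mostly of this shape.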
Who wants fluent conversations with an AI to get help with real-time translation or location-based information (e.g. when traveling)?
Who wants to delegate tasks to an AI so it can help with everyday productivity tools like calendars, email, etc.?
I understand that some of the stuff is not there yet. But dismissing an emerging technology as useless baby stuff? Reminds me of what people were saying about the web, Web 2.0, etc.
My productivity has taken giant leaps over the last two years, probably because I'm willing to regularly invest some time into understanding and exploring which workflows can be optimised. It might actually not be trivial, and even some AI companies are not able to showcase their tech in realistic problem-solving scenarios. But it's there.
Apple is just really, really bad at this atm. Their leadership has transformed the company into a mindset of micro-optimisations, no more taking risks, etc.
The entire industry is in a very cart-before-the-horse mode here, because this is less "cool new tech being able to mature" and more "frantic economic bubble that happens to manifest through new tech".
New tech tends to start with garage-scale startups targeting the enthusiasts, the experimenters, the hobbyists: the ones willing to put in the effort to play with it, map what is and isn't possible, and file off the rough edges. And when you're making a product like that, you probably have to package and market it entirely differently: a world of datasheets, programmer's references, and schematics, to give that audience the tools to get the most out of it.
If and when you're lucky, you get to the VisiCalc moment, when someone finds a way to deliver a mainstream value proposition so compelling that people line up waving their Mastercards. There's a 200% chance that value proposition will not be the one you put on the marketing flyer to sell the kit to early-adopters, and it may not even come from the firms who launched the market in the first place.
Apple, Microsoft, and OpenAI are all trying to short-circuit that process. You can't just throw a trillion dollars at the problem, shove early-stage products in front of customers, and expect to magically win the future.
It's like trying to make desktop computing happen in 1977 by busting into every house in the country, bolting an IMSAI 8080 to random appliances unsolicited, and telling the owners to enjoy their new computing-enabled future.
> dismissing an emerging technology as useless baby stuff?
But that's not Apple. Apple is the company that watches everyone screw up and then comes out with something brilliantly simple that just works. Not only is Apple Intelligence a mess; a lot of it is vaporware [1].
Siri used to be useful enough that I could ask her to send short texts and place calls. But now, when I ask it to message someone I am regularly in touch with, it spouts out a message to someone with a similar-sounding name I haven't spoken to in years.
> But dismissing an emerging technology as useless baby stuff? Reminds me of what people were saying of web, web 2.0,
Nobody was saying that about the web. The web in 1994 had email and instant messaging out of the box; it replaced letter mail and consumer fax overnight.
We're several years into the GenAI wave, what has it changed for Apple users?
It was Cursor six months ago, VC with Copilot three months ago, and currently it's Windsurf / Cursor. Copilot is lagging on features: it used to be Chat & Compose that were missing, now it's Agents and MCP stuff. But once it's in there, it just seems more robust and better integrated with everything imo.
It would have been great if they had disclosed which products.
I've been building tons of projects with AI lately, and while this is a massive productivity boost, the code itself doesn't scale.
The acceleration when you start with zero is massive, but with a growing code base, AI hits a wall at some point.
From there on, you'd better understand what you've built.
I think you misunderstand the purpose. Who cares if it adds technical debt later if your goal is just to have something to show off to get investments? The goal of every startup is to get funding and an exit. The concern is not long term maintainability.
Even for the small subset of companies for which this is true, you still need to actually have a successful exit. Technical debt is fine from that perspective, but only if the whole thing comes crashing down AFTER you sell. If it comes crashing down too soon, you've done nothing but waste a few years of your life.
Remember that most successful exits happen more than 5 years after founding. Having an AI vomit out a prototype in a week vs doing it yourself in 4 might get you seed funding marginally faster, but if it delays product development by more than 3 weeks over the next 4-5 years, it's still not worth it.
> Who cares if it adds technical debt later if your goal is just to have something to show off to get investments? The goal of every startup is to get funding and an exit. The concern is not long term maintainability
Not everyone works in startups, specifically to avoid this disgusting mentality
If I had to deal with this "build garbage quickly to get money and run before the house of cards collapses" mentality in my day to day life I would put my face into a wood chipper
I loathe people who think this way, and it is so miserable that all tech is becoming just a vehicle for this sort of grift
Nobody cares about tech debt when large orgs are happy to rewrite every workload each decade. Every reorg finds the debt, blames it on "the last guy" (now in management), and replaces some components with new tech. Rinse and repeat.
Source code just isn't an asset anymore, and that trend has been slowly growing since serverless; GenAI just accelerated it, and "bucket o' lambdas" is a valid architecture now.
I watched this happen decades ago. Smart coders knew about memory allocations; okay coders just assumed the garbage collector would handle it. One friend of mine wrote code that was 1000 times faster than what the people in the next cubicle over produced. Why? Because he was careful with memory usage and didn't trigger virtual-memory thrashing.
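To make the "careful with memory" point concrete, here's a toy Python sketch of mine (not the code from that anecdote): repeated string concatenation can copy the growing result over and over, while collecting pieces and joining once keeps allocation, and thus GC and paging pressure, down.

    # Allocation-heavy: each += may build a brand-new string and copy everything
    # accumulated so far, churning memory as the result grows.
    def build_report_slow(lines):
        report = ""
        for line in lines:
            report += line + "\n"
        return report

    # Allocation-light: keep the pieces and do a single final join.
    def build_report_fast(lines):
        return "".join(line + "\n" for line in lines)

Both produce the same output; the difference only shows up once the data is big enough that the extra copying starts pushing you into swap.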
Yeah, these new-fangled "compilers" will never catch on.
Programmers who rely on them will stop learning machine code, and won't know how their program really works. That's if the compiler actually compiles your code at all, without throwing an internal error, making you change your (correct) code around arbitrarily until it actually accepts it. But at least with an internal compiler error you know the compiler has broken - rather than it silently miscompiling your code to do the wrong thing.
But even then, even if the compiler accepts your code without barfing, and generates correct machine code from it, it still won't generate as efficient machine code as you could write by hand yourself.
Nope, these compilers will never catch on, and never get reliable enough to be useful for serious software engineering.
-- Some programmer circa 1975, probably, who lives in my head mumbling this to themselves whenever I'm sure generative-AI-based "programming" is a crock of shit. Although, to be fair, the 2005-era developer drunkenly ranting that UML diagrams will make programming 100x more productive any day now is a handy counterpoint.
I mean, the difference between a compiler and an LLM is correctness. The compiler MUST conform exactly to a spec. An LLM is useful precisely because it does not.
> One friend of mine wrote code that was 1000 times faster than the people in the next cubicle over
And did that 1000x speedup make a difference to users? Are we talking about an on-click event that now took 10μs instead of 10ms? Was this a 1000x speedup in a hot critical-path bottleneck, or was it an already quick in-memory post-processing operation that fired after waiting 30 seconds on a sluggish database query?
Sorry to doubt so much, but the vast majority of times someone boasts about a speedup like this, it turns out to be done for bragging rights rather than for the benefit of the project. A 1000x speedup is only impressive if you can show that the time you improved upon was actually a problem.
Must it be swallowed up? Is it not possible for an improvement in productivity to still be a net positive? I believe that's what happens in the real world.
Yes, please. But you know, these days everyone is doing their own research and thinks they nailed it. But it's often sloppy research, unfounded arguments, etc.
Just looking at the map in this article, and whatever source it comes from (some people don't seem to bother sharing sources), it doesn't seem accurate (e.g. why is Greenland a different color from Denmark...).