It's a static vs. realtime graphics issue rather than a mobile vs. desktop one. Basically, Apple treats the whole graphics stack as a hard realtime problem, and most people don't appreciate that.
Once you cross the line into realtime, a lot changes. Anything non-deterministic becomes a source of problems, and GC is one of the biggest. Under realtime constraints, manual memory management is easier than dealing with a non-deterministic black box.
So even on the desktop, a naive GC is an inferior architecture for a realtime program, by the very definition of realtime. There's no room for non-deterministic, deferred batch operations in 60fps graphics. A specially designed GC (incremental, concurrent) might overcome this, but I haven't seen such an implementation actually shipping on these platforms.
Unlike in the old days, people want apps that work more dynamically and more smoothly. Like games, all apps will eventually become fully realtime, which means you need a realtime graphics approach rather than a static one.
If you're lucky and your users don't care about that stuff, you're safe. But Apple is a company that has to treat a single dropped frame as a failure. That's why they don't use GC and finally deprecated it.
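To make the deadline argument concrete, here's a back-of-the-envelope sketch in Swift. The render cost and pause duration are made-up numbers purely for illustration, not measurements of any particular collector:

    // Frame-budget arithmetic at 60fps. The per-frame render cost and the
    // GC pause below are hypothetical figures, just to show the failure mode.
    let frameBudgetMs = 1000.0 / 60.0   // ~16.7 ms to produce each frame
    let renderWorkMs = 12.0             // assumed cost of the frame's own work
    let gcPauseMs = 8.0                 // assumed stop-the-world pause landing mid-frame

    let worstCaseMs = renderWorkMs + gcPauseMs
    // 12 + 8 = 20 ms > 16.7 ms, so this frame misses its deadline even though
    // the average frame time over a whole second may still look fine.
    print(worstCaseMs > frameBudgetMs ? "dropped frame" : "on time")

The point is that a deadline miss is a miss; good average throughput doesn't save you.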
No. But Apple has a shockingly small number of employees relative to other big software names, and they have historically had far fewer third-party developers than other platforms.
As a result, they have chosen to consolidate mindshare on one memory management technology rather than dilute the talent pool across two competing approaches. Given that iOS is much, much more popular than OS X, they have standardized on the technology that works best for the more popular platform.
I would respectfully have to disagree. I've been programming for about 30 years, but finally broke down and bought my first Mac about a year ago. It's a nice platform, and the native apps are a joy to use, compared to the sludge that most Windows apps are.
Oh, and it lets me do most of what I have been doing on Linux the last 18 years. Not quite everything, but pretty close.
No matter how much you tweak and tune a GC, there are still going to be times when it destroys "locality" in a hierarchical memory system and causes some kind of pause. For a small form entry program, this is probably negligible. As the size of the program and/or its data grows, or the time constraints grow tighter, these pauses will become more and more unacceptable.
Most of my work the last 10 years has been in Java, but the JVM is not the one true path to enlightenment. TMTOWTDI :-)
Yes, insofar as their desktop operating system is also their laptop operating system.
A prominent theme at WWDC this year was all of the changes they've been making to improve power management. A garbage collection thread always running in the background is going to interfere with your efforts to save battery. A predictable strategy based on automatic reference counting, on the other hand...
Have you looked into what ARC really is and does? If not, I suggest you do.
ARC replaces the manual retain/release management that used to be required; the compiler inserts those calls for you. Most[1] of the time, writing code under ARC feels just like using GC[2][3]. The extra thought required compared to garbage collection is pretty small, and the determinism alone is quite valuable when debugging, not to mention the improved performance. (There's a short sketch after the footnotes illustrating both points.)
[1] Except when interfacing with non-ARC code.
[2] Some garbage collection algorithms can cope with reference cycles, which ARC can't, so you need to make sure the cycle is broken by using a weak reference.
[3] As with Java, you still need to consider object lifetimes to some extent, particularly when registering for notifications or setting callbacks.
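For what it's worth, here's a minimal Swift sketch of those two points; the Room/Person classes are hypothetical, and ARC behaves the same way here as in Objective-C: deallocation happens at a predictable point in the code with no collector thread, and a strong reference cycle has to be broken with weak by hand:

    class Room {
        var occupant: Person?                 // strong reference
        deinit { print("Room deallocated") }
    }

    class Person {
        weak var room: Room?                  // weak: breaks the Person <-> Room cycle ([2] above)
        deinit { print("Person deallocated") }
    }

    var room: Room? = Room()
    var person: Person? = Person()
    room?.occupant = person
    person?.room = room

    person = nil  // nothing is freed yet: Room still holds the only strong reference to Person
    room = nil    // both deinits run right here, deterministically, as the strong refs go away

If Person held room strongly as well, neither object would ever be deallocated under ARC, which is exactly the cycle case footnote [2] warns about.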
Objective-C GC wasn't around for very long, and it never really worked well; in particular, using it with C code (and Objective-C apps use C stuff directly all the time; there's no equivalent of the JNI) was very messy and error-prone.