
I see CarPlay (and CarPlay Ultra) as being for automakers who don't want to put in all the effort to design and maintain a good proprietary UI (CarPlay is a godsend in cars with crappy UI, i.e. most of them).

Rivian is a luxury vehicle brand with a first-class UI/UX. I imagine going with their own first-class UI and CarPlay Ultra would be a mess; two separate interfaces for the same controls, but laid out differently. Makes a lot more sense they'd be working with Apple to integrate more Apple features into their own UI, rather than having to maintain two separate first-class UIs that are bound to have discrepancies.

And there's the more obvious answer that they want the entire driving experience to feel like a Rivian experience, given how important that's been for luxury EVs on the software side. Supporting a canned OS would make the vehicle "feel" the same as every other car that also supports it.


Apple CarPlay Ultra supports customization, as its use in the new Aston Martin car(s?) shows.


From the last interview question in the article (pertaining to Arm):

> We don’t really try to steer the market one direction or another; we just want to make sure that good options are always supported.

Sounds like their priority is to support Steam on the hardware consumers are currently using. Given that, it makes sense they'd go Arm in the Steam Frame, because FEX alone is already a massive undertaking, and Snapdragon is a leading mobile chipset for performance and power efficiency.


Agree, but I would argue RISC-V is catching up fast.


It’s not even close. Samsung alone ships around 400 million phones a year, that’s 400 million ARM devices a year from a single manufacturer. The number of total consumer ARM devices sold each year is in the billions.

RISC-V's total estimated market value is only around $10 billion, and I strongly suspect a single RISC-V chip costs more than a dollar. RISC-V manufacturing needs to increase something on the order of 1000X just to match ARM volumes, and even then it'll take half a decade for RISC-V devices to build up a meaningful share of actual in-use devices, given there are many billions of ARM devices out there which will remain perfectly usable for many years.


I can't believe these "smooth scrolling" scripts are still a thing. I was wondering why I was having a hard time scrolling the page on my phone; then I got to my PC and felt the reason.

It's incredible to think how many employees of this world-leading Web technology company must have visited this site before launch, yet felt nothing wrong with its basic behavior.


The first thing I noticed when I saw the SGI demos was that the menu UI is strikingly similar to the file select screen in Super Mario 64.

Of course, Nintendo 64 was developed in partnership with Silicon Graphics, so there's a clear connection, and I'm far from the first to make this observation. Still, I feel as though there must be some untold history where perhaps it was used as a placeholder menu early in development, but the team grew fond of it and eventually used the same effect for the final release.

Here's a decent comparison: https://www.resetera.com/threads/super-mario-64-took-its-3d-...


Mario 64 had undercurrents of a dreamy, abstract, dare-I-say vaporwave-y quality, which I attribute to the undersung influence of SGI specifically, and early American 3D animation in general, on its development. I think that quality is a big part of its enduring appeal: the Galaxies and Odyssey are technically superior, more polished, and certainly classics in their own right, but even among younger generations it seems like Mario 64 remains the definitive 3D Mario.

My favourite demonstration of this is a comparison of The Secret Aquarium bonus stage [0] with one of the animations in The Mind's Eye [1] (technically the latter is from Symbolics rather than SGI, but 3D animators of the time were in metaphorical conversation with each other); it's maybe the most explicit example of just how direct that connection was.

[0] https://www.youtube.com/watch?v=ARbWJX-P1oM

[1] https://www.youtube.com/watch?v=NYBes8ki3lo


Didn’t Jurassic Park have a similar interface running on the workstations?


This article inspired me to check if Google has something similar for Google Workspace, which also just increased its price due to a bunch of Gemini integration I have absolutely no need for.

As it turns out, they do—but it's hidden from the "Plans and Upgrade" page, which only shows the Standard plan and above. After some digging, I finally found an inconspicuous dropdown on my plan's billing page that had an option to downgrade my plan. Upon clicking it, I was taken back to the earlier "Plans and Upgrade" page, but this time, the Starter plan was made visible on the page.

It's exactly half the price of the Standard plan, just with less storage, no Gemini, and some restrictions on other enterprise features I've never even heard of. Pretty bizarre and upsetting that they completely hide the existence of the Starter plan like that.

I'm hoping I can eventually bring my reliance on Google services down to zero, whenever I can afford the effort it takes to migrate to something better.


The CRT effect is immediately what stood out to me as well. It's the first time I've ever looked at a "CRT filter" that really gave my eyes the sensation of looking at a real CRT, specifically an Amiga one. It's so good that I would urge the author to share it with Amiga emulator developers to see if there's any chance of it being implemented. The one exception might be the line that occasionally travels from bottom to top; that's more reminiscent of a camera artifact than something I experience with my eyes on Amiga monitors.


Agreed about the moving line; also, the effect is a bit overblown IMO. And the scan lines should curve at the top and bottom rather than being straight as they are now. Very good approximation though.


Let's be real, Commodore has no one to blame but themselves for squandering their 5-year lead in hardware and OS. They were carried hard by the passion of their engineers, but irredeemably greedy and soulless at the top. At Microsoft and Apple, engineers were the lifeblood from the very beginning. At Commodore, they were a spreadsheet column.


It's all relative; Commodore sounds like it was nirvana compared to Atari!


Kind of wild to imagine an alternate universe where Commodore and Atari were still big names in computing. Would have loved to see the Amiga continue to grow; it seemed so ahead of its time.


Commodore probably got more pleasant after Jack Tramiel left to go run Atari. Unfortunately it got left in the hands of Irving Gould, who treated the company as his personal piggy bank and looted it til there was nothing left.


Imagine if Commodore had built ARM, or something like it. A 16-register RISC in the '80s was basically an instant win. They would have been in the CPU lead for a couple of years.

And they had the resources. I mean hell, Acorn was a tiny company and they pulled off building an absolutely incredible machine in the Archimedes.

It's just a question of what you dare to do. If they had been as bold as the original Amiga team, with VLSI Technology's 2µm CMOS in the late '80s they could have built an incredible machine. I think they held on too long trying to do their own semiconductors.

Acorn didn't have the bandwidth to really innovate on the graphics side of things, and because of their problems with the OS, they never managed to get enough software on the platform. And they just didn't have the market either.

Commodore on the other hand, actually had a pretty high quality OS with lots of software and users already.


I think some of this stuff isn't the responsibility of HTML. If HTML already has a full autocomplete spec, isn't it the fault of browsers/extensions/OS if the implementation is broken? Or are you saying the spec is too ambiguous?

A lot of stuff becomes redundant under the framing that HTML is designed to provide semantics, not a user interface. How is a toggle button different from a checkbox? How are tabs different from <details>, where you can give multiple <details> tags the same name to ensure only one can be expanded at a time?

Image manipulation is totally out of scope for HTML. <input type="file"> has an attribute to limit the available choices by MIME type. Should there be special attributes for the "image" MIME type to enforce a specific resolution/aspect ratio? Can we expect every user agent to help you resize/crop to the restrictions? Surely, some of them will simply forbid the user from selecting the file. So of course, devs would favor the better user experience of accepting any image, and then providing a crop tool after the fact.

Data grid does seem like a weak spot for HTML, because there are no attributes to tell the user agent if a <table> should be possible to sort, filter, paginate, etc. It's definitely feasible for a user agent to support those operations without having to modify the DOM. (And yes, I think those attributes are the job of HTML, because not every table makes sense to sort/filter, such as tables where the context of the data is dependent on it being displayed in order.)

Generalized rant below:

Yes, there are pain points based on the user interfaces people want to build. But if we remember that a) HTML is a semantic language, not a UI language; and b) not every user agent is a visual Web browser with point-and-click controls, then the solution to some of these headaches becomes a lot less obvious. HTML is not built for the common denominator of UI; it's built to make the Web possible to navigate with nothing but a screen reader, a next/previous button, and a select/confirm button. If the baseline spec for the Web deviates from that goal, then we no longer have a Web that's as free and open as we like to think it is.

That may be incredibly obvious to the many Web devs (who are much more qualified than me) reading this, but it's not something any end user understands, unless they're forced to understand it through their use of assistive technology. But how about aspiring Web devs? Do they learn these important principles when looking up React tutorials to build some application? Probably not—they're going to hate "dealing with" HTML because it's not streamlined for their specific purpose. I'm not saying the commenter I'm replying to is part of that group (again, they're probably way more experienced than me), but it reminded me that I want to make these points to those who aren't educated on the subject matter.


The interesting thing about testing values (like testing whether a number is even) is that at the assembly level, the CPU sets flags when the arithmetic happens, rather than needing a separate "compare" instruction.

gcc likes to use `and edi,1` (logical AND between 32-bit edi register and 1). Meanwhile, clang uses `test dil,1` which is similar, except the result isn't stored back in the register, which isn't relevant in my test case (it could be relevant if you want to return an integer value based on the results of the test).

After the logical AND happens, the CPU's ZF (zero) flag is set if the result is zero, and cleared if the result is not zero. You'd then use `jne` (jump if not equal) or maybe `cmovne` (conditional move - move register if not equal). Note again that there is no explicit comparison instruction. If you don't use O3, the compiler does produce an explicit `cmp` instruction, but it's redundant.
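
For reference, here's a minimal sketch (my own toy example, not the exact test case) of the kind of C this is about; compile it with `gcc -O3 -S` or `clang -O3 -S` and look at `dispatch` to see which instruction your compiler picks:

    /* Hypothetical externs, declared but not defined, so the compiler
       can't see through them and optimize the branch away. */
    void handle_even(void);
    void handle_odd(void);

    void dispatch(unsigned int i)
    {
        /* Per the observation above: gcc likes to lower this test to
           `and edi,1` followed by `jne`, while clang uses `test dil,1`
           followed by `jne` -- no explicit `cmp` at -O3. */
        if ((i & 1) == 0)
            handle_even();
        else
            handle_odd();
    }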

Now, the question is: Which is more efficient, gcc's `and edi,1` or clang's `test dil,1`? The `dil` register was added for x64; it's the same register as `edi` but only the lower 8 bits. I figured `dil` would be more efficient for this reason, because the `1` operand is implied to be 8 bits and not 32 bits. However, `and edi,1` encodes to 3 bytes while `test dil,1` encodes to 4 bytes. My guess is that's because `and` has a form that takes a sign-extended 8-bit immediate, while merely addressing `dil` costs an extra REX prefix byte.

There is one more option, which neither compiler used: `shr edi,1` will perform a right shift on EDI, which sets the CF (carry) flag if a 1 is shifted out. That instruction only encodes to 2 bytes, so size-wise it's the most efficient.

The right-shift option fascinates me, because I don't think there's really a C representation of "get the bit that was right-shifted out". Both gcc and clang compile `(i >> 1) << 1 == i` the same as `(i & 1) == 0` and `i % 2 == 0`.
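
Concretely, these are the three spellings in question; in my testing, gcc and clang at -O3 generate the same code for all of them (a single AND/TEST of the low bit), and neither picks the shift-out-into-carry form:

    /* Three ways to spell "i is even" in C. The gcc/clang builds I tried
       compile all three identically; none of them use `shr` plus a
       branch on the carry flag. */
    int is_even_mod(unsigned int i)   { return i % 2 == 0; }
    int is_even_and(unsigned int i)   { return (i & 1) == 0; }
    int is_even_shift(unsigned int i) { return (i >> 1) << 1 == i; }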

Which of the above is most efficient on CPU cycles? Who knows, there are too many layers of abstraction nowadays to have a definitive answer without benchmarking for a specific use case.

I code a lot of Motorola 68000 assembly. On m68k, shifting right by 1 and performing a logical AND both take 8 CPU cycles. But the right-shift is 2 bytes smaller, because it doesn't need an extra 16 bits for the operand. That makes a difference on Amiga, because (other than size) the DMA might be shared with other chips, so you're saving yourself a memory read that could stall the CPU while it's waiting its turn. Therefore, at least on m68k, shifting right is the fastest way to test if a value is even.


> That instruction only encodes to 2 bytes, so size-wise it's the most efficient.

In isolation it's the smallest, but it's no longer the smallest if you consider that the value, which in this example is the loop counter, needs to be preserved, meaning you'll need at least 2 bytes for another mov to make a copy. With test, the value doesn't get modified.


That is true, I deliberately set up an isolated scenario to do these fun theoretical tests. It actually took some effort to stop the compiler from being too smart, because it would want to transform the result into a return value, or even into a pointer offset, to avoid branching.


> On m68k, shifting right by 1 and performing a logical AND both take 8 CPU cycles. But the right-shift is 2 bytes smaller

There's also BTST #0,xx but it wastefully needs an extra 16 bits to say which bit to test (even though the bit can only be from 0-31)

> That makes a difference on Amiga, because (other than size) the DMA might be shared with other chips, so you're saving yourself a memory read that could stall the CPU while it's waiting its turn.

That's a load-bearing "could". If the 68000 has to read/write chip RAM, it gets the even cycles while the custom chips get odd cycles, so it doesn't even notice (unless you're doing something that steals even cycles from the CPU, e.g. the blitter is active and you set BLTPRI, or you have 5+ bitplanes in lowres or 3+ bitplanes in highres)


> There's also BTST #0,xx but it wastefully needs an extra 16 bits to say which bit to test (even though the bit can only be from 0-31)

That reminds me, it's theoretically fastest to do `and d1,d0` e.g. in a loop if d1 is pre-loaded with the value (4 cycles and 1 read). `btst d1,d0` is 6 cycles and 1 read.

> the blitter is active and you set BLTPRI

I thought BLTPRI enabled meant the blitter takes every even DMA cycle it needs, and when disabled it gives the CPU 1 in every 4 even DMA cycles. But yes, I'm splitting hairs a bit when it comes to DMA performance because I code game/demo stuff targeting stock A500, meaning one of those cases (blitter running or 5+ bitplanes enabled) is very likely to be true.


> it's theoretically fastest to do `and d1,d0` e.g. in a loop

That's true, although I'd add that ASR/AND are destructive while BTST would be nondestructive, but we're pretty far down a chain of hypotheticals at this point (why would someone even need to test evenness in a loop, when they could unroll the loop to do 2/4/6/8 items at a time with even/odd behaviour baked in)

> I thought BLTPRI enabled meant the blitter takes every even DMA cycle it needs, and when disabled it gives the CPU 1 in every 4 even DMA cycles

Yes, that is true: https://amigadev.elowar.com/read/ADCD_2.1/Hardware_Manual_gu... "If given the chance, the blitter would steal every available Chip memory cycle [...] If DMAF_BLITHOG is a 1, the blitter will keep the bus for every available Chip memory cycle [...] If DMAF_BLITHOG is a 0, the DMA manager will monitor the 68000 cycle requests. If the 68000 is unsatisfied for three consecutive memory cycles, the blitter will release the bus for one cycle."

> one of those cases is very likely to be true

It blew my mind when I realised this is probably why Workbench is 4 colours by default. If it were 8, an unexpanded Amiga would seem a lot slower to application/productivity users.


I'm thrilled to see someone bring up Magicore! I don't do much publicity because it's been a quiet 2 years working on the game engine, without much flashy content to show for it. But we're ramping up to begin production of the final game assets, so I anticipate having a lot more to share this coming year.

Here is a small demo I threw together for AmiWest 2024, from last October: https://youtu.be/xIYrhKHEPEA

I also have a personal blog which is largely a development blog for Magicore (https://dansalva.to/blog). My next post will be about a recent feature where I use Amiga's hardware acceleration to draw rays of light that can be obstructed by passing objects. Proof of concept video here: https://youtu.be/rFWFTuWx82M

