Senior Intel CPU architects splinter to develop RISC-V processors (tomshardware.com)
156 points by osnium123 on Aug 26, 2024 | 73 comments


The headline may be misleading. It initially read to me as though Intel is forming a new division to work on this. Instead, these folks are going to a new startup outside of Intel.


Moonshot teams incubated inside of a huge, slow bureaucracy rarely work. Spinning it off as a startup of its own gives it the best shot.


English is my second language but I interpreted "splinter" as leaving the org entirely. Was the title changed some time ago or is my English that bad?


Splinter definitely implies separating from the parent entirely. A “spinoff” would be a related corporate entity.


Did they change it later? Splinter does not seem likely to be interpreted as forming an internal group.


I do hope that moves like this accelerate RISC-V development in a significant way. While there are a lot of RISC-V cores in the wild, the top end of performance is not great: they are way behind in terms of performance per watt. But the upward trend is clear; with a bit more gusto, it may be only a matter of time until they are nipping at Arm's and x86's heels.


> the top end of performance is not great

I wonder how much is possible at that level. Not an expert myself, but I've read a critique of the RISC-V ISA by an ISA designer [1] which I think mentioned several design choices that may make it more difficult to optimize for performance (e.g. longer pipelines, issues with branch prediction); an easy-to-understand issue is explained in the introduction of that critique.

[1] https://news.ycombinator.com/item?id=24958423


> critique of the RISC-V ISA by an ISA designer

You mean an ex-Arm engineer, of which Arm has thousands. And not an ISA designer or (more relevantly) a CPU designer but, as the post itself says, a verification engineer.

Actual top high performance CPU designers, such as Jim Keller, say that RISC-V is just fine.


The old saying, or some variation of it: if you fire your bottom 10%, your top 10% will also leave.


That so-called "old saying" makes no sense in the real world. In my experience, top-percentile engineers hate working with bottom-percentile engineers, since they massively drag things down for everyone. Firing the bottom performers actually makes life better for the top ones and convinces them to stay. And vice versa: keeping bottom-percentile engineers around for long convinces top engineers to leave, since they want to work with equally competent people who can get shit done, not be around incompetent people who coast and watch the clock or, worse, drag everyone down.


Over here in the real world of bottom-10%-firing company cultures: those top engineers are leaving because they are tired of the culture of infighting and backstabbing, where setting up others to fail and blaming them is just as effective as doing good work. Also, your super-helpful generalist collaborator just got fired because everyone on your small team is a high performer and the top-down ruling was to fire 1/10 of each team.

I cannot think of a successful tech company that is still applying the GE 10% management style. It has consistently been shown to fail.


Where do you see me promoting the GE style? I was talking about getting rid of the incompetent people that skilled people don't like to work with (if you disagree, you were probably never paired with one), since being paired with one makes your own performance suffer and makes you want to leave ASAP. Not about firing the bottom-ranked 10% according to some random stack ranking that can be gamed.


How would you actually pull that off at corp scale, if not via stack ranking?


It makes perfect sense in context. You're assuming that the bottom % of engineers drag things down because they're bad. That's not what it's about.

Some companies imposed rather brutal policies of "we do an annual review and sack the bottom x% of employees". This is done irrespective of whether those people are bad or simply in the lowest stratum (by your possibly flawed estimation) but still good. Because workers often don't know which group they will fall into, it produces enormous stress throughout. Few people like working under those conditions, so those who can, leave. Who will that be? Those most able to leave. Who are they? Your best workers. Goodbye, top tier...

Be aware that some people who contribute least to the bottom line are actually still very useful.

This kind of management brutality has, I believe, mostly died out now, for good reason.


>> This kind of management brutality has, I believe, mostly died out now, for good reason.

Bad news, after all the layoffs over the last two years, stack ranking and quotas are coming back.


In the real world, companies have no idea who the top and bottom 10% are, because they use extremely low-quality productivity metrics.


Obviously those metrics exclude all of the connections, institutional knowledge, and other duties-as-assigned that typically make up the bulk of the magic keeping things functional, which managers are generally too afraid to acknowledge or ask for resources for, if they're even aware of it.


> In my experience, top-percentile engineers hate working with bottom-percentile engineers, since they massively drag things down for everyone

You live in a different world than I do then, because the overwhelming majority of people don't care at all, let alone they would "hate".


They care when those people impact their own performance.


That is not supporting evidence for a policy of "fire the worst X%" being anything other than a recipe for internecine warfare.


Double the savings! (This quarter).


No one told Jack Welch


He fired the one who told him that. It didn't matter: as long as the books looked great the next quarter and the bonuses kept flowing, Mr. Welch didn't give a fuck.


Welch was an addict (addicted to growth for himself) who made his problems everyone else's problems.


Good. Those high talent genius minds deserved better. They deserve all the support and love they can get on their new ventures.


Do these companies simply sell Verilog source trees? Or do they deliver fully packaged wafers?


Often the product is "hard IP": a design that has been tuned and validated for a specific manufacturing process. A rough software analogy would be buying a license for a proprietary library that's been validated on a specific platform. Labor and risk reduction.


I wonder which markets these RISC-V processors will target. I have a 64-core Milk-V computer that I'm testing a Java application's compatibility on. Looking forward to seeing where this goes.


I thought Java had a really good test suite to ensure platform/JVM compatibility. Are you running an official JVM build?


How’s it going?


What is meant by "Open Specification Core IP"?

There is plenty of RISC-V CPU IP out there. What sort of target market are they aiming at?


I guess they are claiming they can make it better, which, given the team, is likely. Making a CPU, especially one that is validated on a process, is a hard problem. There is definitely room for improvement in RISC-V processors.


I wonder if they have any non-competes that'll prevent them from going on to develop x86 processors.


It's patent licenses that prevent more x86-64-compatible CPUs from appearing. All the best recent stuff is controlled by Intel and AMD, who have cross-licensing agreements with each other.

https://en.wikipedia.org/wiki/List_of_x86_manufacturers, https://en.wikipedia.org/wiki/X86


Even without the patents, there is a lot in x86, and it would take an order of magnitude more engineers to make a compatible x86 CPU than a RISC-V one with standard extensions.


They don't need to. Recompilation techniques like Rosetta 2's are well understood now, and they can add extensions to RISC-V processors that aid in reimplementing x86 behavior on top of RISC-V.
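
To make the idea concrete, here's a deliberately toy sketch in C of the table-driven core of a binary translator: decode a source-ISA instruction, emit a target-ISA equivalent. The instruction names are hypothetical and this is nothing like Rosetta 2's actual design; the hard parts a real translator faces (x86 condition flags, memory ordering, self-modifying code) are exactly where small RISC-V extensions could help.

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical, heavily simplified decoded x86-style instruction. */
    typedef enum { OP_MOV, OP_ADD, OP_CMP } X86Op;
    typedef struct { X86Op op; int dst, src; } X86Insn;

    /* Emit a rough RISC-V equivalent for one instruction. Real x86 CMP
       sets condition flags, which base RISC-V lacks; here we fake it by
       computing the difference into a scratch register. A custom
       flags-helper extension could shrink such sequences. */
    static void translate(const X86Insn *i) {
        switch (i->op) {
        case OP_MOV: printf("mv   x%d, x%d\n", i->dst, i->src); break;
        case OP_ADD: printf("add  x%d, x%d, x%d\n", i->dst, i->dst, i->src); break;
        case OP_CMP: printf("sub  x31, x%d, x%d\n", i->dst, i->src); break;
        }
    }

    int main(void) {
        X86Insn block[] = { {OP_MOV, 1, 2}, {OP_ADD, 1, 3}, {OP_CMP, 1, 4} };
        for (size_t k = 0; k < sizeof block / sizeof block[0]; k++)
            translate(&block[k]);
        return 0;
    }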


As a note, non-competes are for the most part illegal in California, from my understanding.


The linked company page says they are located in Hillsboro, Oregon[1], which is where Intel also has a campus[2]. My guess is that most people at this new company just moved across the street. How are non-competes treated in that state?

1. https://www.aheadcomputing.com/#comp-lzm51d9d

2. https://www.intel.com/content/www/us/en/corporate-responsibi...


And some folks are still questioning why the RISC-V ISA is much less toxic than any IP-locked CPU ISA.

I think RISC-V also has some GPU-oriented instructions, doesn't it?


RISC-V is a mess of dozens of extensions that vendors can decide to implement, with the option of adding custom instructions as well. The application profiles are recommended sets of extensions to implement for certain use cases, e.g. microcontrollers without operating systems, embedded systems with full operating systems, etc.

The base ISA is very minimal, resulting in long, inefficient code sequences for common GPU tasks. It would be tempting to implement the vector extension and see how well it maps to GPU workloads, but afaik nobody has done this in earnest. The more traditional way would be to implement lots of small cores and extend them with packed SIMD (at least 4x Int32+FP32).
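
For a sense of what "mapping the vector extension to GPU-style workloads" means in practice, here's a minimal strip-mined kernel using the standard RVV C intrinsics. Treat it as a sketch, not a benchmark; it needs a compiler and hardware with the V extension.

    #include <riscv_vector.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Vector add: the RVV analogue of a GPU kernel doing one element per
       "thread". vsetvl asks the hardware how many elements fit per pass,
       so the same binary scales across implementations with different
       vector register lengths. */
    void vadd(int32_t *c, const int32_t *a, const int32_t *b, size_t n) {
        for (size_t i = 0; i < n;) {
            size_t vl = __riscv_vsetvl_e32m1(n - i);          /* elements this pass */
            vint32m1_t va = __riscv_vle32_v_i32m1(a + i, vl); /* loads */
            vint32m1_t vb = __riscv_vle32_v_i32m1(b + i, vl);
            __riscv_vse32_v_i32m1(c + i, __riscv_vadd_vv_i32m1(va, vb, vl), vl);
            i += vl;
        }
    }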


I know a bit of AMD GPU machine code, and based on what I've heard, RISC-V only has support for "GPU vector" instructions, not "GPU scalar" instructions; namely, there is no "hardware thread mask" definition.

And I heard that a RISC-V GPU ISA was already extended with ray-tracing hardware instructions.

For sure, 3D pipeline programming will be specific to one implementation, but since Vulkan/DX12, it would have to be hardware command ring buffers (like for AMD GPUs).
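
For readers unfamiliar with the term: a "hardware thread mask" (the EXEC mask in AMD's GCN/RDNA ISAs) is a per-lane bit vector deciding which SIMD lanes commit results when a branch diverges. Here's a plain-C simulation of the idea, purely illustrative:

    #include <stdio.h>

    #define LANES 8  /* lanes executing in lockstep, like a GPU wavefront */

    /* On a divergent branch a GPU runs both paths, with mask bits
       selecting which lanes commit. RVV's v0 mask register offers a
       similar per-element predicate, but base RISC-V has no scalar
       exec-mask registers of the kind AMD's GPU ISA exposes. */
    int main(void) {
        int x[LANES] = {3, -1, 4, -1, 5, -9, 2, -6};
        unsigned mask = 0;

        for (int l = 0; l < LANES; l++)          /* "if (x < 0)" diverges   */
            if (x[l] < 0) mask |= 1u << l;
        for (int l = 0; l < LANES; l++)          /* taken path, masked      */
            if (mask & (1u << l)) x[l] = -x[l];
        for (int l = 0; l < LANES; l++)          /* else path, mask flipped */
            if (!(mask & (1u << l))) x[l] *= 2;

        for (int l = 0; l < LANES; l++) printf("%d ", x[l]);
        printf("\n");
        return 0;
    }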


So, this team wants to build a "RISC-V, but you have to pay for a license" portfolio, and people are celebrating.

I must be missing something.


You want them to do it for free? Do you know how hard that is?


It means another participant in a free market.

This is very welcome.


Intel losing design talent to startups.


It's the cycle of life. Intel was founded by people leaving Fairchild, and that in turn was founded by the Traitorous Eight from Shockley.

The ecosystem is healing.


Meh, all that happened in an environment with a complete lack of competition. Now it just seems like East Asia will eat everyone's lunch while the ecosystem is healing.


> Meh, all that happened in an environment with a complete lack of competition.

It was an insanely competitive environment. Shockley had a leg up from being the literal inventor of the thing he intended to manufacture and from getting an early investor to start his laboratory.

> Now it just seems like East Asia will eat everyone’s lunch while the ecosystem is healing.

They've got good labor prices but almost no legitimate IP.


They are making up ground in terms of IP, even if it is things like Loongson's LoongArch (LA64), which is basically just MIPS and RISC-V slammed together. China is hungry, and their government is throwing a lot of investment behind that hunger.

https://en.wikipedia.org/wiki/Loongson


> They've got good labor prices but almost no legitimate IP.

They don't care about your IP. If it can be manufactured and sold, they will do it. And that's how it should be.


Intel can just buy the startup back in three years if it works out and they need it. Probably cheaper than trying to develop whatever in-house, and with a better chance of success.


They'll be laughed out of the room.

As happened when they tried this with SiFive.


If the startup is successful it won’t be cheap to buy


In fact, Intel already tried and failed to buy SiFive for $2B. https://www.tomshardware.com/news/intel-failed-to-buy-sifive


Hardware companies grow exponentially slower than SW companies. Given Intel spent (wasted?) over $10B each on Altera and Mobileye, I'm sure they'll be able to afford whatever this will be valued at.

Of course, the real question is whether it'll be Intel that buys them or one of Intel's competitors.


> Hardware companies grow exponentially slower than SW companies

Meh. That's only because the market is irrational and many SW companies are massively overhyped and overvalued. You can't possibly tell me with a straight face that WhatsApp is worth the $19 billion Facebook spent on it, especially given WhatsApp is not making any money.


WhatsApp was a very smart buy for $19B considering its ubiquity around the world. There are countries where business basically does not happen unless you are using WhatsApp.

Also, Meta surely gleans tons of data from WhatsApp that they can use to better sell advertisements, even on other Meta properties.


But there are no ads in WhatsApp, how do they make money from those using it?


https://business.whatsapp.com/resources/resource-library/wha...

https://scontent-iad3-1.xx.fbcdn.net/v/t39.8562-6/454446299_...

Given Meta’s net income trajectory and the very, very widespread usage of WhatsApp, it should be safe to assume that WhatsApp is a valuable asset. If not explicitly making money, at the least it is not in a competitor’s hands, which itself is surely worth billions of dollars to Meta.

https://www.macrotrends.net/stocks/charts/META/meta-platform...


Your example is not doing you any favors, to be honest.

The customer-base capture, along with acquiring the best social graph available (people usually message, through their phones, only people to whom they are truly connected, unlike on social media), was well worth $19B.


> But there are no ads in WhatsApp, how do they make money from those using it?

What use is spending big on buying a large user base if you don't monetize it?


There are ads in WhatsApp. I routinely get pushed to join what are essentially corporate ad-delivery channels.

Right now in my "updates" tab I am being offered stuff like Real Madrid C.F. (a while ago it was another team), WhatsApp itself, Juventus, and Arsenal.

They're dumb as well, because I have no interest in football.

That's spam advertising; they're most definitely getting money out of it.


Companies are valued on potential earnings.

I see a lot of trivial possibilities to monetize 2B+ users. You can either act on those possibilities now, or rely on the valuation to sell the company on. Sooner or later, someone will monetize, and that's what's important.


nVidia?


Unless the company stumbles upon some revolutionary novel computation approach (e.g. 10x more efficient), it seems a long road for a chip designer to become a financial success.

Even if you could buy a RISC-V chip today that was magically 20% better than Intel's, who is going to buy it for serious commercial installations? It took years after Zen 1 before AMD started to gain real presence in servers and laptops.


How long did it take Apple to make their Arm-based CPUs that totally outperform Intel CPUs in laptops? This group could very well do the same. The main problem, of course, is x86 compatibility, since most laptop users run Windows rather than Linux, and Windows support beyond x86 is limited. They might be targeting a different market where x86 compatibility isn't important but power efficiency (or some other metric that x86 sucks at) is: non-Windows mobile devices of some kind, for instance.

I do really wonder what their business strategy is here.


It took Apple ten years. Apple A4 was released in 2010, M1 in 2020.


Apple bought P.A. Semi in 2008 and put them to work on ARM cores right away. On the other hand, there were already noises about the A-series being faster than x86 by the time of the Apple A13 in 2019.

https://en.wikipedia.org/wiki/P.A._Semi


The goal of the A/M-series chips isn't to be faster than x86; it's to have a better power/performance tradeoff. Going as fast as possible gets you "gamer laptops", which weren't good products.

Of course they are fast, which is partly a nice coincidence and partly Intel falling badly behind TSMC.


From what I've read, the Apple M chips aren't faster than the very fastest Intel offerings. But it doesn't matter: what matters is how long your laptop runs on battery doing typical laptop work, and the M chips really put the x86 chips to shame there, unfortunately. So agreed: what they're great at is the power/performance tradeoff.


There are RISC-V chips way slower than Intel's that are already being used in serious (?) commercial installations [that was a lot of words].


The ISA doesn't matter that much for performance at the high end. The most important difference between ISAs is security.

ARMv9 is the best here (several well-designed security extensions) and x86 might be the worst (partly because of variable-length instructions and partly because Intel can't ship SGX). I don't know about RISC-V in particular.


It'll be interesting to see how it plays out. If these folks succeed, it'll show how constrained they were and be a testament to how bureaucratic and broken Intel really is inside.



