The headline may be misleading. It initially read to me as though Intel is forming a new division to work on this. Instead, these folks are going to a new startup outside of Intel.
I do hope that moves like this accelerate RISC-V development in a significant way. While there are a lot of RISC-V cores in the wild, the top end of performance is not great, and they're way behind in performance per watt. But the upward trend is clear; all they need is a bit more gusto and it could be a matter of time until they are nipping at ARM/x86's heels.
I wonder how much is possible on that level. Not an expert myself, but I've read a critique of the RISC-V ISA by an ISA designer [1] which I think mentioned several design choices that may make it more difficult to optimize for performance (e.g. longer pipelines, issues with branch prediction); an easy-to-understand example is explained in that critique's introduction.
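To give a concrete flavor of the kind of issue that critique opens with (hedging here, I'm going from memory): base RISC-V has no scaled-index addressing mode, so a plain array access becomes a dependent multi-instruction sequence. A rough illustration in C, with typical codegen sketched in the comments:

    #include <stdint.h>

    /* An indexed load is one instruction on x86-64 or AArch64 but a
     * three-instruction dependent chain on base RV64I. */
    int64_t load_elem(const int64_t *a, int64_t i) {
        return a[i];
        /* x86-64:  mov rax, [rdi + rsi*8]
         * AArch64: ldr x0, [x0, x1, lsl #3]
         * RV64I:   slli a1, a1, 3
         *          add  a0, a0, a1
         *          ld   a0, 0(a0)
         */
    }

The Zba extension later added sh1add/sh2add/sh3add largely to shorten exactly this kind of sequence.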
You mean an ex-Arm engineer, of which Arm has thousands. And not an ISA designer or (more relevantly) a CPU designer but, as the post itself says, a verification engineer.
Actual top high performance CPU designers, such as Jim Keller, say that RISC-V is just fine.
That so-called "old saying" you mention makes no sense in the real world. In my experience, top-percentile engineers hate working with bottom-percentile engineers, because they massively drag things down for everyone. Firing the bottom performers actually makes life better for the top ones and convinces them to stay. And vice versa: keeping bottom-percentile engineers around for long convinces top engineers to leave, since they want to work with equally competent people who can get shit done, not sit around incompetent people who coast and watch the clock or, worse, drag everyone down.
Over here in the real world of bottom-10%-firing company cultures: those top engineers are leaving the company because they are tired of the culture of infighting and backstabbing, where setting others up to fail and blaming them is just as effective as doing good work. Also, your super-helpful generalist collaborator just got fired, because everyone on your small team is a high performer and the top-down ruling was to fire 1/10 of each team.
I cannot think of a successful tech company that is still applying the GE 10% management style. It has consistently been shown to fail.
Where do you see me promoting the GE style? I was talking about getting rid of the incompetent people that skilled people don't like to work with (if you disagree you were probably never paired with one) since then your own performance suffers and makes you want to leave ASAP. Not about firing the bottom ranked 10% according to some random stack ranking that can be gamed.
It makes perfect sense in context. You're assuming that the bottom % of engineers drag things down because they're bad. That's not what it's about.
Some companies imposed rather brutal policies of "we do an annual review and sack the bottom x% of employees". This is done irrespective of whether those people are bad or simply the lowest stratum (by your possibly flawed estimation) but still good. Because workers often don't know which group they will fall into, it produces enormous stress throughout. Few people like working under those conditions, so those who can, leave. Who will that be? Those most able to leave. Who were they? Your best workers. Goodbye, top tier...
Be aware that some people who contribute least to the bottom line are actually still very useful.
This kind of management brutality has, I believe, mostly died out now, for good reason.
Obviously it excludes all of the connections, institutional knowledge, and other-duties-as-assigned that typically make up the bulk of the magic that keeps things functional, but that managers are generally too afraid to acknowledge or ask for resources for, if they're even aware of them.
He fired the one who told him that. It didn't matter: as long as the books looked great next quarter and the bonuses kept flowing, Mr. Welch didn't give a fuck.
Often the product is "hard IP": a design which has been tuned and validated for a specific manufacturing process. A rough software analogy would be buying a license for a proprietary library that's been validated on a specific platform. Labor and risk reduction.
I wonder which markets these RISC-V processors will target. I have a 64-core Milk-V computer I'm testing a Java application's compatibility on. Looking forward to seeing where this goes.
I guess they are claiming they can make it better, which, given the team, is likely. Making a CPU, especially one which is validated on a process, is a hard problem. There is definitely room for improvement in RISC-V processors.
It's patent licenses that prevent more x86-64-compatible CPUs from appearing. All the best recent stuff is controlled by Intel and AMD, who have cross-licensing agreements with each other.
Even without the patents, there is a lot in x86, and it would take an order of magnitude more engineers to make a compatible x86 CPU than a RISC-V one with standard extensions.
They don't need to. Recompilation techniques like Rosetta 2 are well understood now, and they can add extensions to RISC-V processors that aid in reimplementing x86 behavior on top of RISC-V.
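To be concrete about what such extensions buy you (a hedged sketch, not anything this team has announced): x86 memory ordering is already covered by the ratified Ztso extension, the same trick Apple's TSO mode provides for Rosetta 2. The other classic pain point is that x86 arithmetic sets condition flags as a side effect, while RISC-V has no flags register, so a translator must materialize them in software, roughly like this (emu_add64 and X86Flags are made-up illustrative names):

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative translator helper: emulate a 64-bit x86 ADD,
     * computing the flags x86 updates for free in hardware. A host
     * "flags" extension would let translated code skip most of this. */
    typedef struct { bool cf, zf, sf, of; } X86Flags;

    uint64_t emu_add64(uint64_t a, uint64_t b, X86Flags *f) {
        uint64_t r = a + b;
        f->cf = r < a;                        /* carry: unsigned overflow */
        f->zf = (r == 0);                     /* zero */
        f->sf = (int64_t)r < 0;               /* sign */
        f->of = ((~(a ^ b)) & (a ^ r)) >> 63; /* signed overflow */
        return r;
    }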
The linked company page says they are located in Hillsboro, Oregon [1], which is where Intel also has a campus [2]. My guess is that most people at this new company just moved across the street. How enforceable are non-competes in that state?
RISC-V is a mess of dozens of extensions vendors can decide to implement, with the option of adding custom instructions as well. The application profiles are recommended sets of extensions to implement for certain use cases, e.g. microcontrollers without operating systems, embedded systems with full operating systems, etc.
The base ISA is very minimal, resulting in long, inefficient code sequences for common GPU tasks. It would be tempting to implement the vector extension and see how well it maps to GPU workloads, but afaik nobody has done this in earnest. The more traditional way would be to implement lots of small cores and extend them with packed SIMD (at least 4x Int32+FP32).
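To make the "try the vector extension" idea concrete, here's a minimal RVV saxpy sketch (assuming a toolchain that ships riscv_vector.h with the v1.0 __riscv_-prefixed intrinsics). Each strip of the loop plays roughly the role a warp/wavefront iteration does on a GPU:

    #include <riscv_vector.h>
    #include <stddef.h>

    /* saxpy (y += a*x) with RVV strip-mining: vsetvl picks how many
     * elements this iteration handles, so the code is VLEN-agnostic. */
    void saxpy(size_t n, float a, const float *x, float *y) {
        for (size_t vl; n > 0; n -= vl, x += vl, y += vl) {
            vl = __riscv_vsetvl_e32m8(n);
            vfloat32m8_t vx = __riscv_vle32_v_f32m8(x, vl);
            vfloat32m8_t vy = __riscv_vle32_v_f32m8(y, vl);
            vy = __riscv_vfmacc_vf_f32m8(vy, a, vx, vl);
            __riscv_vse32_v_f32m8(y, vy, vl);
        }
    }

RVV does have explicit per-element predication via mask registers; what it lacks is the implicit divergence/masking machinery the comment below is about.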
I know a bit of AMD GPU machine code, and from what I've heard it seems RISC-V only has support for "GPU vector" instructions, not "GPU scalar" instructions: namely, there is no "hardware thread mask" definition.
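For anyone who hasn't met the term: the "thread mask" (the EXEC mask on AMD hardware) is the per-lane execute bit the GPU's scalar side maintains so a divergent branch can run both sides, with only the right lanes committing results. A toy software model of the idea, in plain C and not tied to any real ISA:

    #include <stdint.h>

    #define LANES 8  /* toy "wavefront" width */

    /* SIMT-style divergence: every lane walks both sides of the
     * branch, but the exec mask gates which lanes write results.
     * Real GPUs keep this mask in a hardware scalar register. */
    void simt_abs(const int32_t x[LANES], int32_t out[LANES]) {
        uint8_t taken = 0;
        for (int l = 0; l < LANES; l++)       /* vector compare */
            if (x[l] < 0) taken |= (uint8_t)(1u << l);

        for (int l = 0; l < LANES; l++)       /* 'then' side */
            if (taken & (1u << l)) out[l] = -x[l];

        for (int l = 0; l < LANES; l++)       /* 'else' side */
            if (~taken & (1u << l)) out[l] = x[l];
    }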
And I heard that a RISC-V GPU ISA was already extended with ray-tracing hardware instructions.
For sure, 3D pipeline programming will be specific to one implementation, but since Vulkan/DX12, it would have to be hardware command ring buffers (like for AMD GPUs).
Meh, all that happened in an environment with a complete lack of competition. Now it just seems like East Asia will eat everyone's lunch while the ecosystem is healing.
> Meh all that happened in an environment with a complete lack of competition.
It was an insanely competitive environment. Shockley had a leg up from being the literal inventor of the thing he was intending to manufacture and from getting an early investor to start his laboratory.
> Now it just seems like East Asia will eat everyone’s lunch while the ecosystem is healing.
They've got good labor prices but almost no legitimate IP.
They are making up ground in terms of IP, even if it is things like Loongson's LoongArch (LA64), which is basically just MIPS and RISC-V slammed together. China is hungry, and their government is throwing a lot of investment behind that hunger.
Intel can just buy the startup back in three years if it works out and they need it. Probably cheaper than trying to develop whatever in-house, and with a better chance of success.
Hardware companies grow exponentially slower than SW companies. Given that Intel spent (wasted?) over $10B each on Altera and Mobileye, I'm sure they'll be able to afford whatever this will be valued at.
Of course, the real question is whether it'll be Intel that buys them or one of Intel's competitors.
>Hardware companies grow exponentially slower than SW companies
Meh. That's only because the market is irrational and many SW companies are massively overhyped and overvalued. You can't possibly tell me with a straight face that WhatsApp is worth the $19 billion Facebook spent on it, especially given that WhatsApp is not making any money.
WhatsApp was a very smart buy for $19B considering its ubiquity around the world. There are countries where business basically does not happen unless you are using WhatsApp.
Also, Meta surely gleans tons of data from WhatsApp that they can use to better sell advertisements, even on other Meta properties.
Given Meta’s net income trajectory and the very, very widespread usage of WhatsApp, it should be safe to assume that WhatsApp is a valuable asset. If not explicitly making money, at the least it is not in a competitor’s hands, which itself is surely worth billions of dollars to Meta.
Your example is not doing you any favors to be honest.
The customer-base capture, along with acquiring the best social graph available (people usually only message, from their phones, other people they are truly connected to, unlike on social media), was well worth $19B.
I see a lot of trivial possibilities to monetize 2B+ users. You can either act on those possibilities now, or rely on the valuation to sell the company on. Sooner or later, someone will monetize, and that's what's important.
Unless the company stumbles upon some revolutionary novel computation approach (e.g. 10x more efficient), it seems like a long road for a chip designer to become a financial success.
Even if you could buy a RISC-V chip today that was magically 20% better than Intel's, who is going to buy it for serious commercial installations? It has taken years since Zen 1 for AMD to gain real presence in servers and laptops.
How long did it take Apple to make their ARM-based CPUs that totally outperform Intel CPUs in laptops? This group could very well do the same. The main problem, of course, is x86 compatibility, since most laptop users run Windows, not Linux, and Windows barely exists outside x86. They might be targeting a different market where x86 compatibility isn't important but power efficiency (or some other metric that x86 sucks at) is: non-Windows mobile devices of some kind, for instance.
I do really wonder what their business strategy is here.
Apple bought P.A. Semi in 2008 and put them to work on ARM cores right away. By the time of the Apple A13 in 2019, there were already noises about the A-series being faster than x86.
The goal of the A/M series chips isn't to be faster than x86, it's to have a better power/performance tradeoff. Going as fast as possible gets you "gamer laptops", which weren't good products.
Of course they are fast, which is partly a nice coincidence and partly Intel falling badly behind TSMC.
From what I've read, the Apple M chips aren't faster than the very fastest Intel offerings. But it doesn't matter: what matters is how long your laptop will run on battery doing typical laptop work, and the M chips really put the x86 chips to shame here, unfortunately. So agreed, what they're great at is the power/performance tradeoff.
The ISA doesn't matter that much for performance at the high end. The most important difference between ISAs is security.
ARMv9 is the best here (several well-designed security extensions) and x86 might be the worst (partly because of variable-length instructions and partly because Intel can't ship SGX). I don't know about RISC-V in particular.
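On the variable-length point, for those who haven't seen it: the same x86 byte string decodes to different instructions depending on where decoding starts, so attackers can harvest "hidden" gadgets from the middle of legitimate instructions. A worked example (the encodings are standard x86-64, though double-check before quoting me):

    /* Same five bytes, two decodings:
     *   offset 0: B8 0F 05 00 00  ->  mov eax, 0x50F   (innocent)
     *   offset 1: 0F 05           ->  syscall          (gadget!)
     *             00 00           ->  add [rax], al
     * A fixed-width ISA (AArch64, or RISC-V without the C extension)
     * can't be desynchronized like this. */
    const unsigned char bytes[] = {0xB8, 0x0F, 0x05, 0x00, 0x00};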
It'll be interesting to see how this plays out. If these folks succeed, it'll be quite the indictment, a testament to how bureaucratic and broken Intel really is inside.