While it is obviously not unrelated to the pandemic chaos, there is a deeper issue. As Nassim Taleb has often pointed out, the same things that make a system "efficient" can often also make it fragile. "Just-in-time" means "no buffer". We have, for decades, been doing as much as possible to make all our economic systems as efficient as possible, which is to say with no extra capacity.
Thus, if it had not been the pandemic, it would eventually have been something else. Sooner or later, there is always a shock to the system. If you have made sure that every step in a long complex chain is optimized to have very little slack, the thing you have optimized for is fragility.
Just-in-time doesn't necessarily mean no buffer: the goal is to minimize excessive stockpiling while keeping enough for continuous production, increasing the stockpile when required.
Toyota, some might say THE pioneer of just-in-time manufacturing, was one of the car manufacturers least impacted by the component shortage, precisely because it started stockpiling very early, as soon as it saw the issue coming.
Here's a quote from another Bloomberg article specifically on Toyota[1]:
"Toyota asks its Tier 1 suppliers to input detailed information about their most obscure parts and materials providers in a complex database that it maintains. Using this system to glean information about, say, a single headlight Toyota purchases for one of its cars, it can get information as granular as the names and locations of the companies that make the materials that go into surface treatments used on those headlights’ lenses and even the producers of the lubricants used on the rubber pieces in the assembly, Toyota spokeswoman Shiori Hashimoto says.
These lines of communication alerted the company early on that it needed to stockpile chips."
This is actually also discussed by Nassim Taleb in his book which I assume the parent comment is referencing.
He says Toyota is one of the few examples where just-in-time was not distorted and warped. So Toyota implements true just-in-time, which can be robust, with buffers. But most other companies implement a half-assed version that is very fragile.
While attending a training event, I talked to a pilot who flew cargo for a Tier 1 supplier to North American auto manufacturers. He had several stories of flying a jet with just a handful of boxes on board to avoid the line-down penalties.
My favorite was tower advising them they’d be fined for a departure during local curfew. “Roger, we better get our money’s worth then” and did a max performance takeoff and turn on course with whatever parts they were carrying.
We have a local field with an 11PM-7AM curfew (technically a landing/takeoff surcharge).
I’m not sure it does much to keep the airport quiet: I’ve been in a holding orbit of three low-altitude airplanes from 6:45 AM to 7:00, all of us waiting to avoid the curfew fee.
So, instead of slipping relatively quietly into the field from a random route to a straight-in, the neighbors got to hear me orbit for three rectangular patterns over the same ground points, plus two other airplanes doing the same low-altitude holding. If I were a neighbor, I’d much rather see the simple straight-in encouraged.
The sports teams all just pay the fee; I never had to break the 11PM side while I was based there, but I would have as well. (It was less than an hour’s worth of gas, so it wouldn’t make sense to divert and reposition later.) I suspect the curfew makes people feel good while making the actual experience worse.
The curfew fine at Sydney is $1M (USD 730K), so large that airlines go to considerable lengths not to violate it: making sure they have accurate wind forecasts for the flight from LAX and running predictive models for various landing scenarios. They will really burn fuel if they can squeak in before curfew, and they have been known to divert flights to Canberra or Brisbane and put the passengers in a hotel overnight rather than risk it.
Sometimes those curfew fines actually cause flights not to depart. Some years ago, I was sitting on the tarmac at John Wayne for about an hour, hoping to make it just under the nighttime curfew, but ultimately the flight was canceled and I had to get a shuttle to LAX and an alternative flight in the morning.
John Wayne is more forceful than some, for sure. I flew a small plane there once and took off right at curfew - the runway lights went off the moment my wheels left the ground.
If they turned off before you passed the departure end, I’d raise that as safety concern (ASRS at a minimum, but this is something that I’d probably take straight to the FSDO).
It's been a while, but flying into Long Beach from a connection in Phoenix, the scheduled plane was delayed somewhere, they had boarded us on a replacement and then realized the plane was too loud, so they rushed us to a different plane (in the next terminal) that had the right noise reduction equipment.
Also, my Dad once had a late-ish flight to Long Beach that got diverted to LAX because delays pushed it past the curfew.
This is not quite accurate. Toyota began stockpiling materials a while back after having its supply chain disrupted. Having extra inventory, no matter how well-implemented, is actually an anti-pattern in lean manufacturing -- the good folks at Toyota are simply wise enough to know when they should embrace lean vs. when they should back off it a bit.
The difference is that Toyota is a uniquely intelligent company with respect to supply chain.
The average American manufacturing company is run by a CFO operating with reports out of SAP or PeopleSoft. Their performance is measured by fiscal results, so running to the penny and holding no inventory benefits them more than making the company resilient. Wall St rewards quarterly performance, not resilience.
But if the optimal strategy only falls apart during uncommon events whose timing you can't predict, how much incentive is there to run a more conservative strategy instead of gambling?
It depends on your objectives. One bad event can put many companies out of business. The auto industry is a good example because US-Canadian border issues and snow can fubar things for half the year.
Look at companies like Boeing with absurdly complex supply chains. The inventory numbers look good, but the factories are idle when a truckload of magic bolts is stuck in a blizzard in South Dakota.
It also increases actual cost. I supported a GE business unit as a supplier for a while. We hosed them for stupid last-minute orders due to this sort of thing. They would pay more for expedited shipping and overtime, waste money on leasing stuff to avoid capex, etc.
There is no optimal strategy because there is no perfect knowledge. That's why good CEOs are paid their weight in gold. A vision and steady hands can pay off handsomely.
Depends completely on the risk vs. reward. Black swans are more common than modeling past data would suggest, so in general a slightly more conservative strategy than the industry standard is more likely to win.
For example, COVID-style shutdowns could reasonably be modeled at X% probability per year after talking to a disease specialist. But not all shocks are going to come from diseases.
All of the car companies dialed down orders at the start of the pandemic. Now they are all trying to stockpile chips because there isn't enough supply. I imagine (no data to support this) that the increased stockpiles are a large part of what makes the problem worse. Kind of like toilet paper, really.
Also, Toyota is now cutting 40 percent of global production due to the chip shortage.
Any system with signalling delay will experience this effect. Changes will get amplified across layers. That's also a common way to die from a viral infection: your immune system goes into overdrive from too many signals of infection from different tissue cells. A common treatment is steroids, which dampen immune response.
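A toy model makes the amplification concrete. This is just a sketch of the classic bullwhip effect under stated assumptions (a two-period reporting delay and a naive trend-chasing order rule, both invented for illustration), not a model of any real supply chain:

    def simulate(periods=20, tiers=4, delay=2):
        # end-customer demand: a single permanent 10% step up
        demand = [100] * delay + [110] * periods
        signal = demand
        for tier in range(tiers):
            orders = []
            for t in range(delay, len(signal)):
                seen = signal[t - delay]  # each tier sees demand 'delay' periods late
                trend = signal[t - delay] - signal[t - delay - 1] if t > delay else 0
                orders.append(max(0, seen + 3 * trend))  # chase the trend, never negative
            signal = [signal[0]] * delay + orders  # this tier's orders feed the next tier
        return signal

    print(simulate()[:12])  # upstream orders swing far beyond the original 10% step

Even with a single 10% step in end demand, orders a few tiers upstream swing wildly, purely because each tier reacts to stale, already-amplified signals.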
It's an inherent instability in any system with long/slow information chains. It's more a failure mode of central planning than of free markets: we had a few companies making bad decisions based on bad demand forecasting. This is the kind of problem commodity exchanges help mitigate.
What you fail to mention (although you linked to an article that covers it) is that Toyota was prepared due to the setbacks it suffered in the tsunami. If it weren't for that, it would not have stockpiled and been ready for a pandemic. Even so, they are now dealing with chip shortages as well.
The thing is that you cannot apply this method (suddenly stockpiling because of an anticipated shortage) globally. Of course, putting gratuitous buffers everywhere would not be a panacea either. But some buffers may be needed to improve global resiliency.
One has to optimize to reduce fragility, but when massive chip (or anything else) shortages start to appear for extended periods, it is obviously already way too late. And way too downstream, if your strategy was just a one-off stockpile decided by a single company, because it would probably have been impossible for everybody to apply that strategy at the same time...
One advantage of starting to stockpile ahead of a crisis is that production is not yet affected much by the crisis, and the total amount buffered could be larger.
They started stockpiling key components after the earthquake roughly 8 years ago. You could do this globally if everyone took the Toyota approach and only stockpiled components likely to suffer disruption.
Seems like a low-on-details article with PR-like flavor.
According to this, Toyota cut production by 40%, though it does acknowledge that Toyota took less of a hit vs industry:
“New cars often include dozens of microchips but Toyota benefited from having built a larger stockpile of chips - also called semiconductors - as part of a revamp to its business continuity plan, developed in the wake of the Fukushima earthquake and tsunami a decade ago.”
What's the quote about selling someone a product whose faults will only show up when you're years gone and halfway around the planet? I wonder how this mentality is avoided/regarded in Japan.
When many of the quality-focused companies in Japan took off they focused on gaining market share primarily through gaining repeat customers. A lot of their sales machinery was constructed around the idea of building a personal relationship with the customer to predict their needs as they arise.
This was in part because the domestic Japanese market was so small. Of course, it needs support from quality development and manufacturing. But it's also efficient, because it's much cheaper to sell something to someone who trusts you than to win customers by cold calling.
I think this paper backs that up pretty well and goes into a lot more detail about why it matters to focus on flow. In fact, predicting and identifying issues that stop production, including demand-supply mismatch, is essential for a smooth flow of operations. But humans are humans, and we plan for the best-case scenario without giving due consideration to failure modes. Manufacturing is hardly an exception.
I wonder if they track the financial status of these far-upstream suppliers. I've been doing some work with supply chain monitoring, and smaller companies in Europe are often hesitant to share information on financial performance, etc.
As your link says, Toyota was not materially impacted until now, whereas all the other manufacturers had to scale back their production much earlier (I'm seeing articles from January and March). An 8-month-longer runway than everyone else is a pretty good deal.
The Toyota Production System is /the/ precursor to “Lean Manufacturing”, you could say that Toyota wrote the book on “lean”. While there are many companies that try (and fail) to replicate TPS, the Toyota Way is the standard by definition.
perl4ever had an interesting comment that they deleted after being downvoted for some reason. I'm happy to take the downvotes, because I think their take was a counterpoint worth thinking about.
* * *
>"Just-in-time" means "no buffer"
No, it doesn't. It just means the buffer is somewhere other than where you assume it's necessary. That may in fact be a net gain.
Computers and cars and whatever don't wear out instantly. It will be a long time until all the newest equipment that already exists is old enough it absolutely has to be scrapped.
In some other thread on HN, someone said that everything depends on manufacturing, and services don't mean anything without manufacturing capacity.
Services! Like maintaining and repairing what's already been manufactured! Like, for an extreme example, what happened in Cuba when they couldn't get new cars for a long time. Or during WWII when civilians couldn't buy new cars.
Day in and day out, people talk about how terrible a disposable society is, and then we have a crisis that requires like 1% less disposal and it's the end of society.
This comment was probably down-voted because it completely misses the fact that not everyone already has access to the needed resources. Sure, even if you already have everything you want/need now, and you own high enough quality parts, you may be golden for a decade or so. Eventually though, you'll need replacements. If access to those replacements is limited, how much value does the rest of your scrap really have?
Creative outlets for junk are going to keep becoming more fashionable as we struggle to reconcile our values with practical prices.
> It just means the buffer is somewhere other than where you assume it's necessary.
The rest was an example of one place you might see such a buffer. In practice, there are many such buffers, of varying capacity and time duration. For transportation, there’s the households who have 3 cars but could get by with only 2: they’ll sell their extra car once the price makes it worth their while. There’s the cyclist who’d like to have a car for out-of-city trips, but decides to hold off until inventory catches up (a demand-side buffer, if you will). Many more “buffers” could be imagined.
And, well, the market libertarian might feel really satisfied over the last year of commodity prices. Sure, supply constricted and prices rose, but at no point did we see any mass crises where people became stranded somewhere because they had no transportation option. For the vast majority of people, they just had to make convenience/cost tradeoffs that were a little bit different than before.
> but at no point did we see any mass crises where people became stranded somewhere because they had no transportation option.
You are just not paying enough attention. As prices rise the poor become poorer. How can you possibly claim that nobody is stranded by the increase of prices?
I think we’re maybe speaking past each other. I don’t disagree that things have become more expensive — and hence that the poor can own relatively less than they could before.
But, no. I haven’t heard stories of people being stranded en masse. I meant that word literally, because of the context of transportation, if that wasn’t clear. Even in the days of mass lockdown: I saw huge lines at food banks in suburban and even rural areas, but even those who were in a position to wait four hours for a few days of essential goods seem to have still found ways to get there. That observation is subject to some obvious biases, and yes, I’m sure some people did fall through the cracks, fatally. But no: I truly haven’t witnessed what I would consider mass crises caused by people losing access to transportation.

Despite massive shocks to a system of JIT manufacturing, there wasn’t a collapse of the supply side in the sense you would have expected had you looked at inventory, divided by the rate of consumption, and thought “that’s how many weeks it’ll be until anyone who doesn’t own some form of transportation is physically stranded”. Whatever services and goods existed two years ago, equivalents can by and large still be found today, albeit at a possibly increased price or possibly decreased utility. They may have dimmed, but “the lights stayed on”. Is the world free of problems? No.
I don't think the comment completely ignores that. As the Cuba example demonstrates, the potential for a strong second hand market for existing repairable hardware is another form of buffer.
One of the issues is that repairability and/or durability have also been decreasing for many categories of goods.
I also got downvoted for stating that I feel I have enough HW and can live a few years off what is there. I guess people have been trained to get the newest thing every year, such that it now hurts them badly not to be able to do so for a few months. And yes, I agree, this could be a wonderful time to actually strengthen our know-how regarding keeping things alive instead of just throwing them away. Sounds like a great chance for the climate change people. How is the market for used equipment doing, btw?
To speak to a "long perspective" on the issue: that was one of the things that scared people when electronics moved to integrated circuits. There's no "know-how for keeping them alive". Once the magic smoke gets let out, they're just dead.
There was a time when every part of a computer - including, literally, an individual "bit" of ram, could be hand-repaired, craftsman-style (and even visually assessed for failure), but we've been on a long, long trajectory towards none of this stuff being user-serviceable.
We're now watching the rise of SoC, wherein my graphics card and my ram crawl into my CPU and disappear as discrete components. Strange times.
I've just replaced a Blackmagic Atem Mini: sadly, I tinkered with my previous one trying to keep it alive and couldn't return it. I burned it up trying to do too much with it: ran the internal processing at super HD from a Pocket Cinema 4K, with two separate other inputs coming in and having to be rescaled, color corrections, and then the output downscaled after chromakeying to HD.
The unit basically melted itself until all it could do was flash bright colorful lines. Total loss.
So the replacement (dropping right into the previous workflow, though I'm making a point to not run as many rescaled inputs by default) will sit on a homebrewed heatsink with thermal pads and fan. That way it probably won't burn up, but if it did I won't have taken it apart, and I can most likely return it for a replacement.
The inability to repair is a concern, but on the other hand, building the cooling support the unit should have had in the first place into the larger environment is the DIY aspect. Should work.
Well, I get your somewhat nostalgic attitude. But back then, computing equipment was also way larger, louder, and more energy-hungry. Heck, one of the "museum" disk drives we have at work used to make the lights flicker on the lower floor when you powered it on. So yeah, we're past a certain point of repairability. That's sad, but it's more a function of progress than something we can do anything about.
But that doesn't mean all our existing tech will emit the magic smoke in the next few months. The endless need for more computing power is mostly driven by software bloat. We could extend the useful life of current smartphones and gadgets by a significant amount of time if we stopped falling for constant featurism and did, let's say, a year or two of cleanup. Sure, "the future" like AR and such will have to wait, but heck, this "we will sell you 3D" thing has been around since the 80s and didn't take off in the mainstream yet, so what?
We couldn’t really have achieved this degree of power efficiency on 80’s tech though. Likewise, we couldn’t have gotten to 2020’s tech (always-on wrist-watch displays that last days), without increased power efficiency. Advancements in either one facilitate advancements in the other.
As a fun example, I’ve been simulating core memory at the level of electric/magnetic fields lately. It takes literally trillions of floating point operations to simulate just 1 nanosecond of a single bit of core memory. In doing this, I can identify things like “if we were to arrange the cores like <this>, we could reduce the peak magnitude of the radiated fields when a core flips, thus we could get by with using slightly softer ferrites (which use less energy when switching state) without encountering noise problems.”
But of course, a trillion fp operations was a lot more back then! Having more advanced tech made it substantially easier to optimize the earlier tech. Optimizing compilers are maybe a more direct, but limited, example of this. The faster a CPU can run your code, the more cycles you can have your compiler spend making that code run even faster. I’m not sure there’s a good term for these “mutual feedback loops”, but they seem to pop up a lot.
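To put rough numbers on the "trillions of FLOPs" claim, here's a back-of-the-envelope sketch. Every figure in it (grid size, FLOPs per cell, time step) is my own assumption for illustration, not taken from the parent's actual setup:

    # Back-of-the-envelope: why 1 ns of field-level core-memory simulation
    # costs trillions of FLOPs. Every number here is an assumption for
    # illustration, not taken from the parent's actual simulation.
    cells      = 200 ** 3   # ~8M grid cells around a single core
    flops_cell = 30         # rough FLOPs per cell per FDTD-style update
    dt         = 1e-15      # femtosecond step to resolve field propagation
    sim_time   = 1e-9       # one nanosecond of simulated time

    steps = sim_time / dt                 # 1,000,000 time steps
    total = cells * flops_cell * steps
    print(f"{total:.1e} FLOPs")           # ~2.4e14: hundreds of trillions

On 2020s hardware that's seconds to minutes of compute; on the machines the cores themselves shipped with, it would have been effectively unreachable.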
Aren’t most of the power efficiency gains from integration and miniaturization? If you’re charging/discharging a larger amount per bit, it’s going to take more power than a smaller (physically and electrically) equivalent would. That tends to work against practical field repairs (other than replacing at the IC or subassembly level).
The shorter the distance between two components, the less energy you lose to resistance; electromechanical components have physical friction, conversion inefficiencies going from electrical to mechanical energy, and the energy needed to accelerate a mass.
Really, what nukes the repairability of a lot of products these days (ignoring the repair-hostile practices of many companies) is the cost of the item vs. the time needed to disassemble the unit, track down where something went wrong, repair it, reassemble, and hope that the component didn't blow because something else upstream is faulty that you didn't find. In Cuba and Africa, time is cheap while replacement units are somewhere between expensive and unobtainium; in the USA/Japan/Europe, not so much.
If floppy drives were common today, I think many of the issues you highlighted, like increased power usage, would be solved and applied to this domain. We don't have to be past that point.
Are there any standards bodies or informal organizations that try to quantify software bloat and featurism? Seems like an opportunity for groups that are motivated to keep software lean and consumer-friendly to pool some resources, à la open source. Is anything like that out there already?
> I have enough HW and can live a few years off what is there. I guess people have been trained to get the newest thing every year
I imagine the downvotes were for implying that everyone else is in a similar situation as you. What about people who were being efficient before the shortage, and now can't keep their aged equipment running any longer? What about businesses who suddenly need much more equipment than they did previously, due to pandemic restrictions?
> How is the market for used equipment doing btw?
I've found it much harder to find available supplies at any given time, but I haven't actually seen prices going up as a result.
Have you never had a laptop stolen or a phone destroyed by water damage or a car totaled in an accident? Sometimes you actually need a replacement, and when you do, you need it now because you depend on it for your livelihood. Buying new is not always the mindless plodding on the capitalist treadmill you make it out to be. There are plenty of people who can and should delay purchases right now, but there will always be those who just can't wait.
There's definitely a sort of "keeping up with the Joneses" that goes on in technical fields with regard to hardware. I do all my personal projects on a 2013 model Ultrabook. Sure it's underpowered, but not for editing text, compiling small codebases, or building CRUD WebApps. If I need to do some derp learning, I can just spin up a cloud instance. There are certainly productive tasks that benefit greatly from newer hardware (video editing, compiling the Linux kernel, launching a single instance of the Android Studio emulator), but most of this hardware capacity is probably for gaming and, like many people, I'm not much of a gamer.
Since I am still mostly on a plain terminal, there was a time when I used an rpi0w as a laptop, since I really don't need much more than an instance of Emacs and a few ssh sessions in a tmux. I've switched to a real laptop by now, and it's nice to have a bit of local power. But I could actually do all of my day job on an rpi0w.
I think the message is true but misses a major point: yes, using existing hardware, we generally have some buffer through simply delaying replacements or new purchases. Equipment can recirculate (second-hand) and be maintained.
What is being missed are the larger societal needs for new hardware. If you build infrastructure (new public transport line, factory, power plant, communications network, etc) you need new components, lots of them.
The impact of the shortages is that projects get delayed or cancelled due to the unavailability of components or ballooning costs. And it's not just electronics; shipping and raw materials costs are impacted too.
Personally, I have been ranting about the modern use of just-in-time manufacturing for well over a decade. Just-in-time manufacturing as it originated in Japan doesn't actually mean no buffering. Somewhere along the line, in typical Chinese-whispers fashion, the true meaning got lost when it went to the West. If you are a software developer, just look at Agile and Kanban boards.
The problem is made worse when CEOs and COOs don't actually understand this and force those in the supply chain to comply with their view of JIT. It works, in a sense, when everything is normal. It doesn't work when there is a shock. Look at Apple, the only company that took JIT to a level beyond Toyota. They are doing just fine while others are fighting for parts.
The other thing worth pointing out is that this isn't just chips, but every other commodity too. The reasons behind all these different industries' shortages are exactly the same. You could swap the title for beef, poultry, steel, toilet paper, masks, chips, milk, pencils, paper, etc... I really do wish people would learn a little about how supply chains work. They are the fabric of our daily life, and yet very little attention has been paid to them.
It means not carrying needless excess work-in-progress that cannot be delivered to the customer but may very well go stale or never be sold at all.
This is a balance point, not an optimization problem where you're supposed to go to the extreme.
In fact, one of the books that popularized Lean manufacturing, Eli Goldblatt's "The Goal", explicitly makes the point throughout the book that making work centers within a plant locally optimal will cause your overall operation to fail to deliver efficient outputs.
To make global efficiency improvements you have to accept that individual work centers may be inefficient. Rather, just like a developer optimizing their software program, you need to look at your entire organization and find where the actual bottlenecks are, and improve those specific bottlenecks, potentially by running "inefficient" (but non-bottlenecked) work centers to assist the teams that are the current bottleneck.
And then as you change things you'll have to be willing to address a different bottleneck.
Moreover, unavoidable variances (which the book describes in a story about a Boy Scout camping trip) means it will be potentially disastrous to 'optimize' even your critical path, just like you wouldn't want a network router to be consistently near max capacity.
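To make the bottleneck point concrete in code: throughput of a serial line is the minimum of its stage rates, so "optimizing" a non-bottleneck stage is pure waste. A minimal sketch with invented rates:

    # Throughput of a serial line is min(stage rates); improving any
    # non-bottleneck stage changes nothing. Rates are invented.
    rates = {"cutting": 120, "heat_treat": 40, "machining": 90, "assembly": 80}
    print(min(rates.values()))   # 40/hr: heat_treat is the bottleneck

    rates["machining"] = 500     # "optimize" a non-bottleneck work center
    print(min(rates.values()))   # still 40/hr, just more WIP piling up

    rates["heat_treat"] = 60     # improve the actual constraint instead
    print(min(rates.values()))   # 60/hr, and the bottleneck may now move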
> one of the books that popularized Lean manufacturing, Eli Goldblatt's "The Goal",
Turns out it's Goldratt. Very difficult to read as it is in the form of a melodrama. I suppose the point was to make it light reading, but to me it's a difficult slog through the protagonist's personal life to get to the data I want.
Thanks for the correction. You're right, the point is to make it easier to approach for business minded readers (his son even recently got someone to turn it into a graphic novel). Same tradition later picked up by Unicorn Project and Phoenix Project by Gene Kim.
"Inventory" (ideally), "Overhead" or "Waste". Raw materials can at least be resold in concept, but half-assembled gear (i.e. "work-in-progress") can easily end up in a sort of parallel universe where it's not easily sold without the additional investment of time and effort (either to break it back down into components or finish it for sale) but you have to pay each month to maintain it properly until it can be finished.
It's important to remember that in traditional manufacturing at the time, each work center wasn't graded on its ability to contribute to customer sales, they were graded on things like local efficiency on a per sub-assembly basis. The "customer" wasn't the end user, the "customer" was the plant manager or the PMO or some other stand-in.
The work centers were structured around functional tasks like "heat treat" or "machining" rather than an end user product, so your "high priority" customer product had to make it through multiple independent backlogs before it could be shipped, even in theory. God forbid rework would be needed! Quality issues were the job of customer service to address, not the work center that introduced the defect (as after all, it might be years before the sub assembly finally ends up in a user's hands).
I've studiously avoided Factorio because I don't need to get hooked on any more video games, but yes, it's about thinking of your organization's workflow in terms like a critical path (Goldratt would describe the 'critical chain' in a later book), and then discovering where your bottlenecks are.
To be honest I would recommend a separate book for software practitioners, Reinertsen's Principles of Product Development Flow, because it gives you methods to look at which are more applicable to creative work where you can't easily turn your work into "production lines" to then use Lean manufacturing/JIT principles on.
Either way, none of this works if you let every local work center just produce, produce, produce up to its peak output or most efficient point. At that point it becomes impossible to find the bottleneck, because there are so many bottlenecks (including bottlenecks not on the critical path). That's a key reason why all the methods (Goldratt, Reinertsen) focus on reducing the amount of work-in-progress and reducing the batch sizes of each individual work package or work item.
Given that you can't predict what the shock will be or its consequences, how exactly are you going to create a buffer? Creating broad redundancies would result in lots and lots of waste, damaging both the bottom line and the environment.
Also, is this "fragility" really that big of a problem? Sure, prices are going to move up 20-30% for a year or two until the supply chain issues ease, but is that worth driving up costs for decades preparing for potentially the wrong issue?
Toyota has this figured out. After the 2011 quake they built a system to identify critical, difficult-to-replace parts in their supply chain and size buffers for those parts. And they aggregate demand over larger time intervals than other manufacturers. They lasted 18 months into the pandemic shortages before they ran their buffers dry.
We don't really put a price on the environmental and societal damage that "just-in-time" does (think of truck drivers living on the highway, effectively acting as the warehouse, saving the company from building an actual one).
If we did, more robust models might be financially attractive too.
What is the environmental damage from making something not needed? There is a lot of energy in window glass, refined iron, and all the other parts of a car.
You could order adequately, then build and sell those cars or whatever you're making, instead of throwing the parts away. But then, you couldn't change models as swiftly, and need to sell a lot of last year's models.
I've seen every computer part come down in cost over the years, even after briefly becoming expensive, so the recent trends have seemed odd to me.
Yes, publicly traded commodities do all the time because it means there are temporary inefficiencies in the system that can be arbitraged to turn some profit, until they converge at the “correct” price for some period of time.
But GP is correct, products typically don’t come down in pricing. Consumable goods are more elastic in a free market absent collusion, but finished honest-to-god products have a well-known price stickiness problem. (The price will come down, but rarely back to what it was.) Typically the only way that cycle is broken is when the product itself is obviated/supplanted by a replacement (eg an iPhone will never get cheaper but some new phone may come out that is cheaper and everyone moves to it).
I'm surprised you mentioned iPhones, because that's exactly Apple's strategy. They tend to keep three generations around, and each time they introduce a new iPhone, they knock down the price of the previous generations. Here's [0] an article on them reducing the price of the iPhone 8 by $150 (from $559, so a significant cut of 25%) when they introduced the iPhone 11.
But they tend to if there's a temporary price shock. Think hard drives after the Thailand floods, or GPUs during and after a cryptomining boom/bust cycle. Or used car prices after the cash-for-clunkers bill.
> the same things that make a system "efficient" also can often be viewed as making it fragile.
Apart from the Toyota approach, discussed, Dell pioneered a different system: they forced "SLA"s on suppliers for whom Dell was a major customer. This was back in the mail order build-to-order business of the 90s; I don't really know what Dell does these days.
It worked like this: Dell took a truckload of parts. They hit the loading dock; were (in the finance sense "received"); Dell accepted the invoice (to be paid 30 days later or whenever) and the goods appeared on their balance sheet. They used them in computers they built, sent out, and then charged the customer's credit card for, getting paid immediately. When they ran low on part P they called up the vendor and said "deliver us some more right away". What was "right away"? Why pretty much immediately; the vendor had them in a truck outside Dell's facility. If they didn't, well, they stopped being Dell's vendor.
Pretty clever: the contents of those trucks were still on the vendor's balance sheet until Dell asked for them; Dell got to use them and get revenue for them long before they needed to be paid for, but basically not before they were needed. They made so much money on the float, and all at their vendors' expense.
They did the opposite too: if they had inventory ready to go but without a customer (prebuilt configs), they put it in trucks and drove the trucks away from the docks. I don't know how they got their auditors to accept it as GAAP, but they did, and were lauded by Business Week, the WSJ, etc. for their cleverness.
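For what it's worth, the float mechanics work out roughly like this. A hedged sketch of a negative cash-conversion cycle; all the day counts and dollar figures here are invented, since Dell's actual terms aren't public in this thread:

    # Sketch of a negative cash-conversion cycle, Dell-style.
    # All day counts and dollar amounts are invented for illustration.
    payable_days    = 30   # days until the vendor invoice must be paid
    inventory_days  = 0    # parts sit on the vendor's books until pulled
    receivable_days = 1    # customer's card charged at shipment

    ccc = inventory_days + receivable_days - payable_days
    print(ccc)  # -29: Dell holds the customer's cash ~29 days before paying

    daily_parts_spend = 10_000_000          # assumed
    float_cash = -ccc * daily_parts_spend
    print(f"${float_cash:,}")               # ~$290M of vendor money to invest

The more negative the cycle, the more of your working capital is effectively lent to you by your suppliers.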
While impressive, this is only sustainable in a non-sustainable world. The burden that Dell unnecessarily put on its suppliers has to be absorbed by the economy at some point. The single most important point that Toyota makes is stressing the importance of supply-chain-wide sustainability. E.g., by paying your suppliers ASAP and shortening credit lines, you should be able to get the parts enough cheaper to more than make up for the lost interest.
How much of a buffer can you possibly maintain to deal with a year long economic disruption, which changes both supply (by lockdowns and factories being offline) and demand (by a shift in consumer spending mix and free government money)?
It's impractical to keep 12 months' worth of supplies for manufacturing, even when you don't do lean manufacturing - just plain old manufacturing. It's inventory you need to finance, parts become obsolete, and product designs in some categories iterate every year or two.
We just need to deal with this new reality. You can do your best to avoid the worst case scenario but it is after all a statistical inevitability, so you just ride through it.
As I said in another comment here, though, this is going to have long-standing effects on the other end of the disruption. We're going to be left with huge overcapacity and unclaimed stock, and companies that couldn't produce parts will have their customers design their parts out in favor of whatever else they can get their hands on. This is a golden opportunity for the smaller players in the industry. How giants like ST, TI and NXP will deal with this long term, I'm not quite sure. Many companies in 2021, and possibly well into 2022, who would normally design their products with TI parts will not do that, because they can't even get samples, let alone ramp up production.
Toyota (famous for lean manufacturing) built redundancy into its supply chain and only recently ran into chip shortages. It may be hard, but it's certainly possible. [0]
0: How Toyota Steered Clear of the Chip Shortage Mess
That article was back in April. Then, in August, "Japan’s largest car maker said Thursday it was cutting production in the country by 40% in September because of a shortage of semiconductors. The company declined to say whether it would shut down plants outside of Japan."
Most of automotive does not need ASML-class 10nm-and-below fabs. They need robust 200nm to 300nm chips.
This may yield car redesigns with less touchscreen dependence, simpler electronics in the essential systems, and an "infotainment" system that's less integrated with the vehicle and can be added or replaced later.
Already, some new cars have shipped with a blank plate in place of the entertainment system.
There's a mindset that electric cars have to be more complicated. This is strange, because managing the batteries and motors is far simpler than managing an IC engine.
>This may yield car redesigns with less touchscreen dependence, simpler electronics in the essential systems, and an "infotainment" system that's less integrated with the vehicle and can be added or replaced later. Already, some new cars have shipped with a blank plate in place of the entertainment system.
I'd personally see that as a major improvement.
The highly integrated infotainment systems are going to make a lot of these vehicles feel obsolete before their time.
The car industry really just needs to standardize a connector (USB-C?) and a protocol for an external device to drive the dashboard touchscreen and any relevant physical buttons. Like Android Auto and Apple CarPlay, but open and universal.
They already have a standard network bus connecting all the things together. There's even a standard debug port that exposes it, somewhere near the driver seat.
Yeah, but can that drive the display? My impression was that it doesn't have the bandwidth for that. Android Auto requires USB-C, although I'm not quite sure what the actual protocol on the wire is (but I wouldn't be surprised if it's the same as for USB-C displays in general, just with some extensions).
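The bandwidth gap is easy to sanity-check with rough arithmetic (standard nominal bus rates; the video format is an arbitrary example):

    # Classic CAN tops out at 1 Mbit/s (CAN FD ~8 Mbit/s in the data phase),
    # while even modest uncompressed dashboard video needs hundreds of Mbit/s.
    can_bps    = 1_000_000
    can_fd_bps = 8_000_000
    video_bps  = 1280 * 720 * 24 * 30       # 720p, 24-bit color, 30 fps

    print(f"{video_bps / can_bps:,.0f}x classic CAN")   # ~664x over capacity
    print(f"{video_bps / can_fd_bps:,.0f}x CAN FD")     # ~83x over capacity

So the diagnostic bus can carry signals and button presses, but pixels need a separate high-bandwidth link of some kind.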
Most of the pieces are already there. The only problem is that they're proprietary.
> This is a golden opportunity for the smaller players in the industry. How giants like ST, TI and NXP will deal with this long term, I'm not quite sure. Many companies in 2021, and possibly well into 2022, who would normally design their products with TI parts will not do that, because they can't even get samples, let alone ramp up production.
I'm currently designing with STM32 parts and bought the last hundred available anywhere in the world, but that's only enough for a field trial, not for production. The only ones left in any quantity at all are the insanely high pin count BGAs and chip-scale packages... no way.
I looked at TI, NXP and Microchip parts too. There's plenty of older 8-bit microcontrollers available, so if you want an AVR or similar I think you'll be fine because they're on older processes that aren't in as much demand.
If there really is an alternative from a "smaller player", I'm all ears.
MCUs are more of an issue and of course migrating between different families is a huge headache. However the problem extends to other active components too - regulators, interface ICs, sensors, anything really - and there are opportunities to work with the smaller manufacturers there.
Fair enough, although I'm not sure this is true at our scale or our product type. When designing products for a 20 year industrial product life you really don't want to gamble on chips staying in production.
Definitely shortages on regulators as well, although that's outside my area.
> It's impractical to keep 12 months worth of supplies for manufacturing
They didn't need to. The initial disruption was only a few weeks. The problem is that once companies saw there would be a shortage, they started buying up all the supplies they could to rapidly build up buffers, which increased demand for those supplies, worsening the shortage, which in turn meant even larger buffers were required. It's a vicious cycle that could all have been avoided.
I don’t think it would have been without the pandemic, mostly because remote work and education has dramatically increased demand for PCs. There is not a ton of excess capacity especially at the wafer fab level, so PCs running 25% higher than expected was enough to push the industry into shortage. There still would have been a cycle (driven by inventory stocking / restocking), but the shortage would not have been nearly as acute.
The pandemic allowed tens of millions of well-off Western office workers to move out of tiny urban flats into more spacious housing, while having to spend a lot of time in their new homes, time that was previously spent in "public" places (e.g. restaurants, bars, what have you).
Now, what to do with all that space? A giant tv, washer and dryer, a nice stereo set, new induction stove, the list goes on and on…
PC demand also increased, obviously, but bitcoin miners probably caused the most demand for PC hardware components.
The automotive industry scaling back and cancelling orders at the beginning of the pandemic, then being shocked when their orders came back (increased, even) and the chip manufacturers couldn't just step up, was a serious factor too.
Toyota actually had a large buffer of chip inventory which allowed them to maintain production longer than most other auto manufacturers. But it wasn't enough and they're finally cutting production. There are limits to buffering.
If every company suddenly decided that they now want to keep a year's stock of their important parts rather than a week's, demand just went up 52x right away. It's no surprise even basic parts are sold out for the next year at least.
On a weekly basis that would be a one-time 52x increase in incoming orders. On a yearly basis it would be a one-time doubling. But that is just semantics.
What is important to discuss is that no one was paying for an extra idling factory before, and now suddenly one is needed for a year to satisfy every customer wanting to keep a one-year local buffer.
One also has to take into account the effect of a one-year buffer across the whole supply chain. Demand for the manufactured good also increases because your customers want to keep the same one year of buffer, so a one-year buffer (based on your previous demand) would be consumed solely by your customers' buffers, and your orders would have to be larger than one year of inventory for you to keep one year of inventory after the demand spike.
In the end it is easy to imagine that new buffers would lead to a huge scarcity, especially for the longer supply chains.
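A small sketch of that compounding, under the assumption that every tier sizes its buffer off the inflated orders of the tier below and spreads the build-up over a year (the tier count and timing are invented):

    # Every tier raises its buffer target from 1 week to 52 weeks of cover
    # and spreads the build-up over a year. Tier count and timing invented.
    weekly_demand = 100
    old_cover, new_cover = 1, 52

    extra = (new_cover - old_cover) * weekly_demand
    print(extra)  # 5,100 one-time extra units: roughly a doubling over a year

    # Each tier sees the tier below's inflated orders as its "demand" and
    # sizes its own 52-week buffer off that, so the spike compounds.
    demand = weekly_demand
    for tier in range(3):
        demand += (new_cover - old_cover) * demand / 52
        print(f"tier {tier + 1}: {demand:.0f} units/week")  # ~198, ~392, ~777

Three tiers in, apparent demand is nearly 8x the real end demand, with no change in actual consumption.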
Fun fact, in the early days of lean manufacturing (of which "just in time" is a part), the practitioners often referred to it as the "fragile production system" for precisely the reasons you mention here. So yes, lean manufacturing is brutally efficient, but incredibly fragile.
It got me thinking that a buffer is really just operational insurance, and the cost is "efficiency". I wonder if the average efficiency is the same either way. If so, my knee-jerk thought is that I much prefer the smooth buffered model to the spiky unbuffered model.
Unfortunately, large buffers lead to more spiky behaviour than well-tended ones.
One of the reasons is that they hide feedback about downstream capacity problems. (See "bufferbloat" for this in internet terms, Principles of Product Development Flow for it in terms of creative work, and any book on lean manufacturing for it in terms of repetitive work.)
The scary part is that the same applies to clouds. People spend a ton of effort making things multi-region and multi-cloud, yet there is not enough spare capacity to absorb AWS US East going out.
No, that's a degenerate fragile microoptimization inspired by seeing the short-term cost savings of just-in-time and going all in on those savings without understanding just-in-time.
What “just-in-time” actually means is dynamically exactly enough buffering as is cost effective given current expectations of forward conditions considering the likelihood of supply irregularities, cost of production interruption, costs of inventory, etc.
Yeah, the original comment is way off. I worked for a major electronics manufacturing company helping with demand driven MRP (DDMRP, MRP=material requirements planning) and obsolescence (DMSMS - diminishing manufacturing sources and material shortages).
I can definitively confirm that just-in-time is about trying to optimize the amount and strategic positioning of buffer needed to support the "right amount of production" to meet anticipated and actual demand, exactly as you describe.
Now, doing that can be rather difficult in the case of black swan events. But the goal is as you describe.
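The textbook formalization of "exactly enough buffering as is cost effective" is something like the newsvendor trade-off: hold stock up to the demand quantile where the marginal cost of holding balances the marginal cost of stocking out. A minimal sketch; the costs and the normal-demand assumption are placeholders, and real DDMRP is considerably richer:

    # Newsvendor-style buffer sizing: stock up to the demand quantile where
    # expected stockout cost stops exceeding expected holding cost.
    # Costs and the normal-demand assumption are placeholders.
    from statistics import NormalDist

    holding_cost  = 2.0    # $/unit held over the period
    stockout_cost = 50.0   # $/unit of interrupted production

    critical_ratio = stockout_cost / (stockout_cost + holding_cost)  # ~0.96

    demand = NormalDist(mu=1000, sigma=150)  # assumed lead-time demand
    stock_level = demand.inv_cdf(critical_ratio)
    print(f"stock to {stock_level:.0f} units, "
          f"a buffer of {stock_level - demand.mean:.0f} above mean demand")

Note that the "right" buffer is never zero unless stockouts are free; it's a function of how costly an interruption is relative to carrying the stock.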
> dynamically exactly enough buffering as is cost effective given current expectations of forward conditions considering the likelihood of supply irregularities, cost of production interruption, costs of inventory, etc.
"Exactly enough" means "no buffer". You are simply mislabelling inventory as buffer. What people needed was buffer beyond what is cost effective given current expectations because something unexpected happened causing improbable supply irregularities.
“Exactly enough for X” means different things based on X.
“Exactly enough for current production” means no buffer.
“Exactly enough buffering as is cost effective given current expectations of forward conditions considering the likelihood of supply irregularities, cost of production interruption, costs of inventory, etc.” does not generally mean “no buffering” unless conditions are expected to be extremely stable, or response time to changing conditions from identification to new deliveries are near instant and there is negligible risk of that changing, or there is negligible cost (including opportunity cost) to production interruption.
No, it is the definition of no buffer. Buffer is the excess beyond enough.
If you expect some supply irregularities and you don't have enough to compensate for them, that is less than enough - a shortage. Having enough for normal operations (which includes foreseeable irregularities) is no buffer. A buffer is what you have in excess of that.
Not only is there a shock to the system, every (sub)system has had shocks.
I was talking to someone who was rebuilding a home, and the price of plywood went from $8/sheet to over $100/sheet.
Another person said that the company that makes 75% of the electrical panels had stopped production.
Those are just two pieces of the bigger puzzle, but since many if not all pieces have seen similar shocks, it would really surprise me if capacity could ever increase to meet demand.
> "Just-in-time" means "no buffer". We have, for decades, been doing as much as possible to make all our economic systems as efficient as possible, which is to say with no extra capacity.
This is true, but I don't see an antidote. The government can't just tell businesses to "be inefficient"!
TLDR: it essentially was the pandemic: cancelling orders -> redirecting supply -> too late to uncancel -> low supply + competitive hoarding. Plus a 10 year $150B head start.
Very little margin and too much optimization/efficiency are bad for resilience. Couple that with private-equity-backed monopolies/duopolies/oligopolies holding near-total market leverage over necessary supply, and you have trouble.
HBS is even realizing that too much optimization/efficiency is a bad thing. The slack/margin gets squeezed out and, with it, the ability to change vectors quickly. It is the large-company/startup agility difference, with the added weight of physical/expensive manufacturing.
The High Price of Efficiency, Our Obsession with Efficiency Is Destroying Our Resilience [1]
> Superefficient businesses create the potential for social disorder.
> A superefficient dominant model elevates the risk of catastrophic failure.
> *If a system is highly efficient, odds are that efficient players will game it.*
Over-efficiency and supply chain concentration into single points of failure (Asia, especially Taiwan, where most of the chips are made) caused the supply chain disruptions; some hoarding of chips is also going on. [2]
The HBS MBA-itis and Chicago-school thinking of excessive efficiency made the players that can compete larger and fewer, which takes away resilience and invites gaming of the market. The market has thus been gamed, as the players that control it hold more market leverage. When you give up diversification and flexibility, you get leverage used against you and added weight on any needed quick changes, as the large players attempt to control the market. [3]
The US is now building up silicon supply chains again [4][5]; domestic capacity was much bigger in the 90s/early 00s with Intel/Motorola/etc., but business leaders allowed concentration to happen, and it led to market leverage.
Hopefully that same mistake is not made in the future. It will take time to diversify chip supply enough to reduce that market leverage and ensure availability.
This chip shortage, and all the supply chain problems during the pandemic, will hopefully introduce more wisdom and knowledge into business institutions: just because things are OK while being overly efficient doesn't mean it isn't almost a bigger risk than higher prices/costs. Competition is a leverage reducer. Margin is a softer ride, even if the profit margins aren't as big.
In capitalism there is a very simple solution for shortages: price increases. People still haven't realized that the solution is just to increase prices until only enough people can afford the products, so there won't be shortages anymore.
The point is that artificially keeping prices low does not improve the supply. If anything, it has the opposite effect.
By raising prices on the high demand stuff, you send a signal to the people that can get by with alternative stuff that now is a good time to do so.
So, like, highly hypothetically, if I couldn't find a calculator for a good price, I'm in a good seat to whip out my slide rule. This gives me an edge over the people who have to pay high prices for calculators, and simultaneously smooths out demand. What's good for me is also good for the people who really need calculators.
I still don't see how reducing demand isn't a solution to excess demand. I'm honestly confused and would be happy if you could take the time to explain.
It isn't a solution, it is a rejection of the problem. There isn't such a thing as a 'chip shortage', because that implies some natural number of chips that should exist.
That number isn't a constant, it is a function of supply and demand. Therefore there is no such thing as a chip shortage. We'd maybe like prices to be higher or lower depending on our perspective.
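In toy terms, with linear curves (parameters invented purely for illustration): the traded quantity is wherever supply and demand cross, and a demand shock moves the clearing price and quantity rather than leaving some fixed "correct" number unmet:

    # "Shortage" as disequilibrium: with toy linear curves, the traded
    # quantity is wherever supply meets demand, and a demand shock moves
    # the clearing price rather than leaving a fixed "right" number unmet.
    def clearing(d0, d1, s0, s1):
        # demand: q = d0 - d1*p ; supply: q = s0 + s1*p
        p = (d0 - s0) / (d1 + s1)
        return p, s0 + s1 * p

    print(clearing(1000, 10, 200, 10))  # baseline: price 40, quantity 600
    print(clearing(1400, 10, 200, 10))  # demand shock: price 60, quantity 800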
> What happens when the good in question is food?
If there are 10 mouths and 9 meals then there is a food shortage. But that is a biological observation, not an economic one.
> That number isn't a constant, it is a function of supply and demand. Therefore there is no such thing as a chip shortage.
It does take some time for prices to adjust, and if the supply and/or demand curves shift very quickly, there absolutely can be a temporary shortage. I think that's what we're going through now.
We've been seeing 'shortages' since 2016; I remember there being a lot of trouble sourcing a good GPU. This is probably a deeper problem than a short-term issue.
It isn't like COVID is a surprise any more, it has been with us for nearly 2 years.
When we start hitting food shortages down the line from climate change induced damage, you will get to witness the alternative: powerful countries confiscating supplies for their rich citizens (at stable prices to stave off rebellion of course).
Prices go up on the food most people prefer to eat, first. People who cannot afford the foods they're accustomed to eating will switch to lower-priced foods they would never have considered otherwise.
That's largely how tomatoes, potatoes, and more came into use... Desperate, hungry peasants.
When food suppliers see the increase in price, they will work hard to squeeze more yield out of their existing supplies, and as soon as possible will invest in producing more (typically for next year). Others who are capable of producing food, but normally produce something else due to economics, will see the equation change in favor of food and switch their output.
Famines are not caused or encouraged by supply/demand economics. It is vastly more common for communist governments to cause them.
I don’t have a problem with supply/demand economics, and certainly the market exerts pressure to economize like you say. I just take issue with the “there’s no such thing as a shortage” perspective.
Famine is a terrifying word to brandish so flippantly. God help us if we find ourselves in a JIT situation with our staple crops. A famine would make COVID-19 look like a walk in the park on a Sunday afternoon.
Old article (May 5th of this year), which in this industry is a long time ago.
Nevertheless, the situation since then has actually gotten worse, as predicted. It's not only a chip shortage; we should call it a component shortage. A standard 10k 0406-package resistor is sold out around the world. Many high-quality, high-density capacitors you can't get. The same goes for standard semiconductor parts like diodes, MOSFETs or TVS diodes.
Semiconductor sales reps now give you 50+ week lead times for many ICs, meaning no real forecast at this point.
They generally tell you 2022 will be tough; maybe Q4 will ease up.
Many of the major semi companies are running at fairly high capacity, but supporting elements like IC packaging capacity, and sometimes materials, can't keep up.
We looked at some boards to redesign, but when you are short hundreds of parts, it becomes nearly insurmountable to manage a hardware redesign. An interesting and challenging allocation period, definitely worse than 2003-2005.
Shortages in supply are everywhere, even in the labor market. While walking around, I can't pass by without seeing a desperate plea for staff. I see businesses closing down literally because of a shortage of staff. It's as if half of all humans suddenly disappeared and the other half decided to consume massively.
Truck driver here. Besides Walmart (my major shopping store, because you can park a truck up in there), truck stop convenience stores have been chronically short of things for the last N months. I know, the irony...
Unflavored half-and-half single servings for coffee go early. The bulk half-and-half dispensers are usually empty these days. (The flavored side is often still available.) Then coffee lids. Then large coffee cups. Gatorade Zero often goes, and then regular Gatorade. The Powerade seems to always be available. The open cooler boat with yogurt and sandwiches often runs low or empty.
A vicious cycle of staff shortages everywhere, leaving businesses unable to source, produce, deliver and stock at former rates.
Echoing /u/mrkstu: Casey's across the Midwest is generally out of stock of the medium cup for soft drinks, which is normally available in styrofoam or plastic.
Turns out there is no shortage of people wanting to work. But there is a shortage of people willing to be a slave to their job for a miserable compensation. So restaurants here keep complaining about how nobody wants to work for them.
EDIT: I'm not joking; the salaries for almost any position in a bar or restaurant in Spain tend to be a joke, with long hours and shameful conditions. Paying in cash to avoid taxes is a common thing, as is doing extra hours without the extra compensation the law requires. You don't like it? OK, bye, you're fired, and we'll complain to the news reporter on TV about how nobody wants to work anymore nowadays.
In most cases I would agree, but right now I am seeing actual shortages. In web development alone, it seems like every single company decided this is the year to expand. And when every company decides to expand at the same time, there is a shortage of workers to fill those roles, since you can't just pull senior developers out of thin air or straight from the universities.
I was making a comfortable wage previously but this year most of my coworkers have left for other jobs and I'm on the way out too since the current job listings are close to double what I make currently. An eye watering amount of money is on the table right now.
But paying more is a zero sum game. To gain developers you have to be draining them from some other company and it will take years for new supply to come in.
It is a shortage because there are more open positions for senior developers than there are senior developers here.
Paying more won't bring senior developers out of the void because there do not exist enough to fill all positions. If all companies expand at the same time they can not possibly all fill their positions no matter what they pay.
I'm talking about the whole of Australia right now (remote but country based). I can see job listings have all exploded in listed salary and every company I talk to is looking to expand.
It's virtually impossible to bring people in from overseas right now due to covid. So there can be a genuine shortage, as it is not possible to pay to pull people from other countries.
Labor shortages weren’t a very big thing a month before the pandemic, so you have to ask yourself what changed. In many cases it’s that people got comfortable being paid a decent amount to not work, from the pandemic payouts. It’s a really interesting time to be looking at the economics of everything and see how this will play out over the next 12-18 months.
Perhaps it’s only that, for the first time, they got some time to reflect on their job and actually notice the impact it had on their lives and health, realised it had never been worth it, and want to find something else?
I think it is closer to that. A lot of people were complacent in their jobs, and not nearly tapping their potential (market or otherwise). I can't tell you how many people I know or heard 2nd hand got furloughed or laid off, then landed a significantly better paying gig and never went back (sometimes wildly different industries). If they were never forced to change, I guarantee most would have stayed.
> In many cases it’s that people got comfortable being paid a decent amount to not work
Americans got a paltry few thousand dollars. So you certainly agree that if a single payment amounts to more than their shit job then that shit job isn't paying enough.
one thing that changed that often gets overlooked is that covid killed off 648k people so far in the US. that's a lot of people. I would fully expect there to be a labour shortage after approximately the population of las vegas is killed off.
and i know that a lot of the deaths have been in older people who may have already been retired, but they are often the childcare providers who enable others to work.
The other half doesn't need to consume massively, just at the same rates they always were. When every step along your just in time supply chain is running at 50% capacity, the end output is even lower than 50% capacity.
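To see why, here's a toy simulation (all numbers invented): five bufferless stages, each averaging 50% capacity but fluctuating independently. Each period the chain is limited by whichever stage happens to be weakest that period, so end-to-end throughput lands well below 50%.

    import random

    # Toy model, invented numbers: 5 bufferless stages in series,
    # each running at 50% of nominal capacity on average.
    random.seed(0)
    STAGES, PERIODS = 5, 10_000
    total = 0.0
    for _ in range(PERIODS):
        caps = [random.uniform(0.2, 0.8) for _ in range(STAGES)]  # mean 0.5
        total += min(caps)  # no buffers: the weakest stage sets throughput
    print(f"chain throughput: {total / PERIODS:.2f}")  # ~0.30, not 0.50

Buffers between stages are exactly what would decouple the stages and pull that number back toward 50%.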
Come on, it's obvious what the cause is. We meddled with and shut down large parts of the world economy. What hubris to expect no rubber-banding of supply and demand when we deigned to restart them.
This was a problem forecast by many heterodox shutdown detractors. These "shortages" were caused by our politicians and bureaucrats making terrible decisions unilaterally.
This is a downstream effect of the housing crisis. Rents have finally hit a point where even slaving away at minimum wage is not enough. Why live and work in a place where, even working full time, you only go further into debt? You might as well move to a cheaper area, or accept the debt but do something less stressful. Either way, the financial situation is going to be the same.
Both halves are consuming massively thanks to pandemic relief funds and bonus unemployment. Why do you think there’s a labor shortage? You can get paid more money to sit on your ass and order things on Amazon.
Reasonable speculation, but if you look into it you'll see that right now the most popular hypothesis is a lack of fit between job seeking and hiring. Warehouse jobs for example are taking a lot of workers away from other industries right now.
The most convincing piece of evidence for this IMO is that if you compare the states that ended relief funds and bonus unemployment early to try to combat this issue with those that let it keep going, there was no statistically significant difference between the two halves of the USA in terms of hiring shortages.
The whole United States, at least to those who qualify or have chosen to game the system. Federal boosted unemployment funding is finally ending this week, assuming it doesn't get yet another last-minute extension. PPP loans are still being drawn down and forgiven too.
Is the shortage of common components (such as the 10k 0406 resistor you mention) due to increased demand that can't be met because capacity is unavailable, or because the manufacturers scaled production down during the past year and now there is no capacity to scale back up? Or because of supply chain shortages in the raw materials that feed the fabs?
With consolidation, some of the less common component values are being discontinued. I would imagine the same may be happening with standard values in larger sizes and 0406 is kinda big these days.
Longer term this shutting out of companies not on the leading edge will result in other changes. Components fabricated integral to the PCB being one such change.
I only saw it in a presentation from a high-end board house, and then only for limited uses and probably wide tolerances. But the idea is still in my head and makes some sense. Imagine something like a silk screen laying down passives. No, they didn't say how they do it; that was just my thought.
I get the impression looking at the electronics industry from the outside over decades, that techniques seem to be moving from "level to level".
That is, you now see single silicon chips made like entire clusters of computers, with on-board network connections between surprisingly independent cores and modules. It's almost like a bunch of silicon chips all within one monolithic silicon circuit board, with no external wiring.
Then, several of these chips are in turn mounted on a physical board that is "just" another silicon wafer, possibly made on a very old node size, something still measured in microns. But nonetheless, it's a full silicon chip in its own right, with high-tech chips riding on top of it like circuit elements.
And then the motherboards seem to be miniaturising too. I remember when designers hesitated to use chips with much more than 20 pins because they would need too many traces that were too fine. Now? Thousands of pins are the norm for mainstream processors. Soon? Maybe tens of thousands, or even hundreds of thousands, and I wouldn't be surprised if circuit components start getting printed directly into the motherboards, like you said you saw.
I remember going into a library and picking up an old edition of Scientific American, from the 1960s I think. It had a full-page ad from Texas Instruments advertising their record-setting integration, because they could pack ten (10!) transistors into a single chip. They had an "actual size" picture in the ad, so you could see for yourself and count all ten transistors. The metal layer wires looked thicker than I've seen on some small PCBs these days, which should blow your mind...
Passives are actually quite quick to produce. I think it's a good proof that a few giant distributors in China keep buying, and flushing them down the drain.
> Passives are actually quite quick to produce. I think it's a good proof that a few giant distributors in China keep buying, and flushing them down the drain.
What are these immense amounts of passives used for?
I can answer that in the abstract. I don't think that's what's happening, I don't have any reason to think that. Just answering in a vacuum.
Imagine you sell widgets for $200 a pop. Widgets are hard to manufacture, production is limited.
Someone else has 10k widgets that they want to sell for $75 per.
If you buy all the stock, people will be forced to buy yours, because of supply constraint.
Spending $75 to take a widget off the market while selling one of yours at $200 nets you 200 − 75 = 125, which is more income than if you price-matched and sold at 75.
Now why destroy them, and not just sell them back?
Well, they're not necessarily identical to the widgets you make. Maybe they have the other brand's logo stamped on them. Maybe the datasheet doesn't quite match.
Either way, your customers might notice. If you just buy and burn, no one downstream of you sees anything except higher prices.
Note also that the above sounds immoral enough to be illegal in a whole lot of countries, so I would advise you not to do that, though I'm neither a lawyer nor a spiritual advisor.
It's crazy for a near-commodity with ample supply, but if there's a supply shortage and (in the short-term) inelastic demand - because it's a relatively cheap thing that's required for a much larger product - then it may easily be economically effective to restrict supply instead of filling it.
E.g., suppose you can make 100k widgets and have customers for all of them at a dollar each. In a supply shortage with limited alternatives, if you credibly cut supply to 80k widgets, so that some customers will actually be left out, the rest might be willing to pay two dollars per widget. That's 160k instead of 100k in this toy example: more money for fewer goods.
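The same logic, sketched with a made-up linear demand curve instead of the exact numbers above: revenue peaks below full capacity once scarcity pushes the price up.

    # Invented inverse demand curve: buyers pay more as supply tightens.
    def price(q):
        return max(0.0, 3.0 - 2.0 * q / 100_000)

    for q in (100_000, 90_000, 80_000, 75_000, 70_000, 60_000):
        print(q, "widgets ->", round(q * price(q)), "dollars revenue")
    # revenue peaks at 75k widgets (112,500), below the 100k you could make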
They reserve supply in enormous quantities to box out competitors, as you would expect from the top dog. Perhaps they even reserve more than they need sometimes. But I have not seen evidence that they would do it just to deliberately destroy parts.
I don't think there is one quite so nefarious, the tale I usually hear is that Apple buys out the manufacturing capacity, say, on 8GB ram chips - and then its competitors can't find 8GB chips for their own phones, and have to offer a meager 6GB of RAM instead. (If you think about it, doubling your RAM just so manufacturers don't have the capacity to supply your competitors is pretty ruthless and genius)
And it's not like Apple waltzes in and just stuffs all the RAM that's lying around into a shopping cart. One of their uses of their cash reserves is to pre-commit to buying the components they need far in advance.
In principle, it seems to me that this shouldn't be causing component shortages — on the contrary, it should reduce planning uncertainty for component manufacturers.
The fact that some companies have contracts for recurring scheduled deliveries of parts means that when the SHTF, it's going to be really bad for those that don't.
Say Apple has a contract to buy 10% of a manufacturer's capacitors, but then material shortages drop the manufacturer's capacity in half. Apple's share is now 20%, and anyone without a standing order is SOL.
The situation would be the same if Apple did not have a contract, since they can afford to pay a higher price. The contract just lets manufacturers more easily invest in future capacity.
No, without the contract Apple would have to pay more per part. The contracts are x for $y. They do have clauses for price increases so Apple might be paying more, but not what they would pay without the long term contract.
I don't have insight into Apple; this is just how contracts work in general.
My comment was to note that Apple would still end up with 20% of the production since they can afford a higher price. So the same situation in terms of allocation with or without a contract, in contrast to what I thought Gibbon1’s comment implies.
Intel doesn't make 10nm chips that you put in a car or bulldozer or telecom box that has to bake in desert heat. Some of their older process stuff, yes, but not the top.
It's likely for low resistance and temperature stable precision current measurement so the package design can contribute to stability. Vishay is a leader in that stuff.
This is an interesting article, but it doesn't answer why we have this shortage. Presumably without COVID, companies projected what demand would be for years out and could plan manufacturing capacities.
What about COVID caused the shortage? Is it shipping speeds? Change in demand? Less efficiency down the supply chain due to health requirements? Trade changes?
From my understanding companies projected chip needs months and years in advance but a few things happened.
1. Companies cut their orders expecting less demand and lost their place in line for orders.
2. Because of the sudden rush to remote work, people needed everything from new computers for kids to monitors and accessories for a home office, and in general a lot of stuff that uses chips.
3. Because of Covid there were shutdowns at semiconductor manufacturers and even disruptions due to fire and weather.
Suddenly the entire supply chain went into shock, something that had been perfectly balanced between supply and demand was full of uncertainty. Now we have a situation where everyone is begging for chips and they cannot be built fast enough. Essentially the companies are behind on production and the demand is so high they cannot meet it. Everyone keeps predicting an end but it seems that manufacturers are still falling even further behind than they were before.
Additionally, when companies cut their orders, some fabs and other pipeline facilities were shut down for retooling.
Most of the companies that cut their orders were auto and industrials that projected a prolonged downturn and didn't want to carry the inventory on their books. They predominantly use the components manufactured on less advanced process nodes.
Thus when demand for new process nodes was increasing and old ones were decreasing at the peak of the pandemic it made sense for pipelines to retool to meet that demand instead.
On top of that, US-China tensions mean SMIC and Hua Hong Semi are now off-limits to many US producers, which further constrains supply. It also creates an asymmetry where China is experiencing less of a supply crunch. However, there is trouble brewing there too, due to Trump forcing the Dutch to block export of ASML's new EUV lithography tech to China.
This means they aren't able to start building fabs with EUV to fill the coming supply gap there either, and instead are now having to pour billions into their own lithography program. Long term this will be good for the market (but most likely very bad for ASML); short term it's not great.
Why don't semiconductor manufacturers just cut old orders and refund their customers, who would go on and refund their customers? It seems like the financial hit from cancelling old orders and resetting production like this would be what insurance is for, and you'd no longer be fighting a backlog but just taking in orders as they come like you've always done.
>What about COVID caused the shortage? Is it shipping speeds? Change in demand? Less efficiency down the supply chain due to health requirements? Trade changes?
Take a look at "the beer game". One of the lessons from that game is that a single change in consumer demand causes a wide variety of supply chain disruptions. This lesson was learnt in the 1950s and unfortunately it looks like they stopped teaching this in mba schools, so we have to learn it the hard way all over again.
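For the curious, here is a minimal sketch of the dynamic the game teaches. Everything here is invented for illustration (the real game adds hidden information between tiers, and I've simplified upstream supply to always deliver after a fixed delay). Watch what a single step in consumer demand, 4 to 8, does to orders as they move up the chain:

    # Toy bullwhip sketch: 4 tiers, retailer at index 0, factory at 3.
    TIERS, WEEKS, TARGET = 4, 20, 20
    inventory = [TARGET] * TIERS
    backlog = [0] * TIERS
    pipeline = [[4, 4] for _ in range(TIERS)]   # 2-week shipping delay

    for week in range(WEEKS):
        demand = 4 if week < 5 else 8           # one step change at the consumer
        orders = []
        for t in range(TIERS):
            inventory[t] += pipeline[t].pop(0)  # shipment arrives
            want = demand + backlog[t]
            shipped = min(want, inventory[t])
            inventory[t] -= shipped
            backlog[t] = want - shipped
            # naive order-up-to policy: cover demand, refill toward TARGET
            order = max(0, demand + backlog[t] + (TARGET - inventory[t]))
            orders.append(order)
            demand = order                      # my order is upstream's demand
        for t in range(TIERS):
            pipeline[t].append(orders[t])       # simplification: always filled
        print(week, orders)

Each tier applies only a naive "cover demand, refill to target" rule, yet the factory-end orders swing far harder than consumer demand ever did, then collapse to zero once the big shipments land. That over-and-under oscillation is the bullwhip.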
> "...unfortunately it looks like they stopped teaching this in mba schools..."
why do you believe this? we worked through that simulation as part of my mba program some years ago, and mba friends at other schools said they did the same. the bullwhip effect is such a core concept to operations management that it's hard to believe it wouldn't continue to be taught, though perhaps with an updated simulation demonstrating its effects.
Wow, Factorio is an extremely complicated version of the beer game. It's especially obvious visually for my hacked-together systems when I get to the flying drone stage and the drones exhibit the movement patterns like those on the charts in your link. I hate that game.
yeah, factorio does feel like it models this pretty well doesn't it? especially with drones, because they are more flexible about where they go, so the bullwhip can more easily spread to physically-adjacent supply-chains.
if you're not over-supplied, when a new spike in demand starts a whip through your factory, things can go nuts for quite a while. you get power surges. drones that were behaving beautifully before start flying in chaotic patterns due to under/over supply and making a bunch of dumb local decisions about storage boxes. stuff grinds to a halt for no apparent reason, only to discover that some random component has become a bottleneck for practically everything, and then it can quickly shift to something on the opposite side of your factory...
in that sense, perhaps the biters help keep your factory more resistant. you have to deal with occasional component-losses and spikes in weapon demands, and if you don't handle them well enough you can get into an awful persistent struggling state.
It shouldn't be? Maybe I'm missing something, but shortages in factorio normally just flow through the chain without any amplification. There's no planner deciding to order extra to make up for last time or deciding to cut down excess inventory when there's less demand. Or to put it another way, all the buffers in a factorio factory stay the same size and the demand is always just trying to fill the buffer. This whiplash effect is caused by dynamic buffer sizes.
Depends how you play. If you plan out your factory ahead of time with a ratio calculator and only build what you need, then a shortage just causes a corresponding shortage in the output that goes away when the input is fixed.
A lot of Factorio newbies play with a "Spot bottleneck. Increase production. Repeat" mindset, though. The bullwhip effect absolutely shows up here, particularly if you're sharing intermediate products on a main bus. You start constructing purple circuits; that means your red circuit production is no longer adequate and you've starved blue science. You make more red circuit factories; now you're not constructing enough green circuits, and your whole factory has ground to a halt, but the red circuit buffer is filling up. You increase green circuit production; now you're starving the rest of your factory of iron and your steel production is lagging, dropping purple science production.
That doesn't whiplash. They all run at normal speed, no bursts in demand getting bigger and bigger, then suddenly becoming negative demand, then jumping back up.
It's just some parts can't keep up and everything after goes as fast as it can with the supplies it gets, running pretty steadily. It's what you want to happen when supplies are low, for the most part.
Yeah, it definitely shouldn't. But that's me, hacking together whatever I could to see that unsatisfying rocket launch and finally get the game out of my head. So I'd build something new and supply it from a steel conveyor that had a supply buffer, underestimating how much it would actually use. Then it would eat too much and cause a shortage, which meant other components somewhere else overflowed through lack of use, and overall progress would slow down until I ramped up steel production some more. That sort of thing.
So, ideally, everything would go smoothly. And plenty of the time, for me, it did! But a tiny flaw could be magnified through the system, and suddenly I can't produce yellow research bulbs because I was two steel furnaces short of the demand needed for servo arms, and production would have peaks and valleys until the additional supply smoothed things out.
I had those same problems until I started using the Factorio calculator[1]. It took some time for it to be developed, and I only just found out about it earlier this year. So I understand where you are coming from.
That depends. Either way, what I'm describing are my own observations of the instances in Factorio where shortages actually do end up causing bullwhips.
And shortages are not always easy to correct: A "mere bottleneck" is what we have in the chip shortage. Production capacity is the bottleneck, and correcting it is complicated.
That is what I also observed in Factorio: A small increase in demand that exceeds supply creates a bottleneck because production can't keep up. It propagates throughout the system causing other shortages where the components 4 or 5 layers down the chain need the scarce material. And also causes surpluses on multiple levels because components can't be used to make the components that also need the scarce material.
Fixing the shortage isn't always easy. If you need a lot of a specific material to build the production capacity to fill the shortage, that may reveal yet another bottleneck in producing another material, and so on.
IRL, it's possible something like this could occur with the increased uptick in fab production. ASML might be a bottleneck in providing drastically more equipment for more fabs. Or someone up the supply chain that feeds them resources could be bottlenecked. I guess we'll see how it plays out though.
Insightful link, thanks! I knew this info intuitively but didn't know about the beer game, really interesting! Nice to have a little more theory to back up my understanding.
It's a little dumb that basically all advanced chip manufacturing in the world happens within a thousand-mile radius of Beijing while we're over here, an ocean away, wondering where our supply went. More energy needs to be focused on made-in-USA and less on things we have no control over.
There is one important factor that nobody in the popular press is mentioning. A few analysts in the financial industry are starting to point it out.
And that is that semiconductor manufacturing has been chronically and almost criminally underinvested. The coronavirus pandemic was just a triggering event that brought the whole house of cards down.
The thinking in the financial world for the past 10-20 years has been that software is the hot, growing license to print money industry and actually manufacturing chips is a dirty, competitive, low margin, low return on capital industry best left to the Asians. And of the actual chip manufacturers the ones to be most favored were the ones making digital chips and the ones making analog and power semiconductors were the dirtiest most disliked commodity like companies.
Even within the semiconductor world, the financial world heaped money on fabless semiconductor companies like stock market darling Nvidia and did not give much respect to the companies that owned the fabs and made the chips. And even less respect was paid to the ones that made power and analog chips.
Guess where the biggest shortage is now. In power and analog. Also, in packaging, another field that was considered unsexy, low margin and too competitive to bother with.
Everybody talked about how software is eating the world and heaped massive amounts of capital on software companies but they kind of neglected the important role of semiconductors. And even when they talked about semiconductors they talked about the "important" ones, the "brains" of the computer, the cpus and the graphics chips.
So while software companies (especially SaaS ones) had license to lose large amounts of money and still receive endless streams of capital, companies that manufacture chips were judged severely by the market on profitability and free cash flow. This is especially true for analog and power semi manufacturers.
Free cash flow is a very dangerous metric for a growing industry. Capital spending gets subtracted from cash flow to arrive at free cash flow, which means that if a CEO is judged by free cash flow, he will try to minimize capital spending. That of course will hurt future growth.
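A toy model of that incentive, with invented numbers: say each dollar of capex adds 25 cents of operating cash flow the following year. The CEO who starves capex posts better free cash flow every year of the comparison, even while the investing CEO builds the more valuable company.

    # FCF = operating cash flow - capex. All figures invented.
    def fcf_by_year(capex_share, years=6, ocf=100.0, roi=0.25):
        out = []
        for _ in range(years):
            capex = capex_share * ocf
            out.append(round(ocf - capex))
            ocf += roi * capex        # capex buys future capacity
        return out

    print("starve capex:", fcf_by_year(0.05))  # [95, 96, 97, ...]
    print("build fabs:  ", fcf_by_year(0.40))  # [60, 66, 73, ...] lags every year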
I have been investing in analog and power semiconductor companies for many years now, and the CEOs of all these companies knew that a jump in demand was coming. But they all wanted to keep their jobs, so they all talked about how they would grow revenues while minimizing capital outflows, etc. If they had been cheered by the market for having cash outflows of the kind many SaaS darlings have, we would be swimming in semiconductors right now.
For example, if someone wants to do some further reading, check out old investor presentations from Texas Instruments. By old I mean a couple of years ago, before coronavirus. TI is probably the largest power and analog semiconductor manufacturer in the world. Their entire sales pitch to investors was about free cash flow and capital return to shareholders. That is great for many types of shareholders, but it is not something a high-growth company should be doing. Needless to say, they are now swamped by demand and have production capacity nowhere near what demand is.
There is another peculiar quality of semiconductors. They are easy to store and they do not go bad after taking some minimal precautions in storage. This means that any rumor of a shortage and price increases causes everybody that uses semis to go out and buy out everything the suppliers will let them buy.
So there is a very unstable situation: (a) there is a shortage, and (b) even the possibility of a shortage causes hoarding behavior that makes the shortage worse.
That already unstable situation existed before coronavirus. And then coronavirus came. And it triggered a heightened demand for semiconductors. And that triggered a snowball effect of hoarding and higher prices, more hoarding, etc.
The solution is that prices should go up (already happening). Stock prices of semiconductor manufacturers should go up. And here I mean actual manufacturers, not fabless chip designers. This should happen especially for analog and power semiconductors, where the biggest demand and the biggest underinvestment lie. This has not happened as much as it needs to. Higher stock prices should funnel more money into the fab business, and that should eventually produce the necessary growth in semiconductor manufacturing.
All of this will take time. And meanwhile there will be a lot of hoarding. There may also be occasional panics where the hoarders dump a lot of hoarded inventory thinking prices are going down, cause prices to go down and later discover that prices are shooting back up again after their inventory gets used up.
But in the end hopefully we should end up in a world where (1) there is much more investment in semiconductors and (2) we eventually get all the benefits of the wealthy, high production, environmentally sustainable, all electric future we have been dreaming of.
The problem is that stock prices are not going up. TSMC actually tanked (I lost money), mainly because various governments are pledging domestic production, which is probably still 5 years down the line. This only further aggravates the shortage.
TSMC's all-time high is 632, and it is trading now at 620. I don't know about "tanked", or how you lost money (unless you took 10x leverage at the all-time high).
Prices at resellers are going up, of course, but there's a reason why TI, On Semi, Nexperia, etc. might prefer to stock out at a low price rather than have product available at a high price: they still want to be selected by cost-sensitive design engineers who are designing products that will be on the market well after the shortage is over.
> By old I mean a couple of years ago, before coronavirus
So much from before seems old and foreign. Definitely one of those events that marks a definitive before & after... and a during, since we still don't quite know how or when things will settle out to a new normal.
It wasn't COVID that caused the shortages, although it did exacerbate them. We experienced the first disruptions during the previous administration's trade war with China, around 2016-17. They never really went away since.
First, all the predictions went sideways. Consumers needed, not just wanted, more cars and equipment to function as PPE: as mobile personal protective transit bubbles, as remote-work conversation devices, as new types of hardware that weren't being used for those economic functions before. Even as webcams and the like.
Second, the quarantines combined with unemployment that freed an abused workforce from abusive conditions. Instead of being wage slaves forced to take any job whatsoever, workers are able to demand jobs that value safety protocols, pay well enough to live in the area (housing is also extra insane right now), and have reasonable hours (enough of them, at the right times, etc.).
Third, rent: given a combination of working from home, closed "entertainment" venues, and general quality-of-life concerns, many have decided to flee the high rents of cities for suburban and even rural areas where their money goes farther. This drives more purchasing, and more wasteful food packaging and production at home as well. Everywhere around me, prices for eating out have gone up something like 25% since the start of the pandemic; my wages have not. So instead of eating small portions of bulk-shipped and prepared ingredients, I'm now eating at home more, mostly frozen stuff. Rent didn't quite increase that much, but housing is also insane right now, just like used cars, due to the supply crunch.
Supply correction is how we as a culture, country, and world escape this problem. Increasing supply in all of the problem areas is the only way out.
More supply means buying power goes up, irrespective of wages. Those misguided attempts to raise everyone's minimum wage (really a tax on the middle class, since the elites who own the rentable units will just raise their rates) would work far better as a decrease in the cost of a better life. It could also be targeted toward desired resources, growing us out of undesired patterns and standards, which is also an opportunity for better energy efficiency.
That doesn't solve the issue, because the high rental rates are caused by limited supply due to NIMBYism. If you raise taxes on rental income rental rates will just go up and pass on costs to tenants. Rentals are highly inelastic.
The real solution is to densify and build way, way more. Why doesn't SF look like China with skyscrapers on every block?
Well, not sure about SF, but Silicon Valley was partly designed low-density to keep out Black and low-income people. That was a selling point; that's how the 1945-1980 suburban American system worked, and Santa Clara County was the absolute worst. By the time people were maybe relatively less racist, the developers and zoning boards had already done the damage.
Edit: until 1964 it was legal or pseudo-legal for developers to subtly (or not so subtly) advertise that their new developments would not be seeded with non-whites. It was a selling point.
Rental rates should always be set at the highest that the renters can afford according to the supply/demand curve. If you raise taxes and rental rates go up, that means they were leaving money on the table previously.
If the rental rates were set correctly, none of the extra tax can be passed on. The most likely result is that the price of the buildings will go down, and new buyers will be able to get in more cheaply but earn less per rental.
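A quick numeric check of that claim, with an invented linear demand curve and a fixed stock of 50 units: the rent that maximizes the landlord's after-tax income is the same with or without the tax, so a landlord already charging it has nothing to pass on.

    # Invented demand curve; 50 units of fixed (inelastic) supply.
    def profit(rent, tax_rate):
        tenants = max(0, 100 - rent)              # made-up linear demand
        return (1 - tax_rate) * rent * min(50, tenants)

    def best_rent(tax_rate):
        return max(range(1, 101), key=lambda r: profit(r, tax_rate))

    print(best_rent(0.0), best_rent(0.3))   # 50 50 -- same optimal rent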
Once again the Land Value Tax shows its brilliance.
Land Value Tax says you tax land at a (high) flat rate per area regardless of value to encourage density. That is entirely different from higher property taxes on landlords (which would not lower rents but might lower property values).
I agree that LVT would make a huge improvement if it was set high enough though. But it will never happen when you have homeowners leveraged up 20x on govt subsidized mortgages who vote for themselves.
The article is long on speculation, but it doesn't have even basic data, such as inventory or shipped-component levels, to do the most basic verification of its theories.
Are we shipping more chips than ever, with demand merely allocated unevenly?
Are covid plus multiple factory incidents at fault?
Even the cost-to-stay-at-the-cutting-edge facts seem a little suspect for at least some chip markets. E.g., embedded chips such as those for automotive use are generally multiple generations behind leading-edge fab technology, and really don't need a cutting-edge fab to keep profitably producing many industrial embedded components.
- Blame Toyota with their Just In Time inventory management ideas.
- Factory closures, lockdowns, etc.
- Increased demand for certain items. Everyone's at home surfing the web, playing games on their computers, mining bitcoin?
- Monetary policies are creating excess (see bitcoin mining) demand.
- Shipping delays. Ports running at reduced capacity. Ships sitting at sea waiting to get access. Suez Canal blocked. Shipping companies have no spare capacity. Blame Toyota.
- Building new factories takes time, getting/building new machinery takes time, everything was set up to not have any over capacity (since that's loss) ... blame Toyota again.
- Big companies swallowed everyone so we don't have as much diversity of products as we used to. Probably related to easy money as well.
tl;dr Toyota, Greenspan, Covid ... Somewhat in that order.
EDIT: Should probably add the trade war between the US and China to that list.
> But the Tohoku earthquake’s aftermath pushed Toyota to increase flexibility, and the value of inventory Toyota carries has almost doubled since 2011. Speaking at a briefing in February, Toyota Chief Financial Officer Kenta Kon said as part of the company’s business continuity plans, it keeps as many as four months of stock for some crucial components such as chips. Toyota didn’t expect the semiconductor shortage to disrupt production in the near term, he said.
Yes. I know that. They were one of the early ones to discover the limitations of their original ideas but that did not make its way to the rest of the industry.
Toyota may have been the people who pioneered Just in Time, but they are also one of the few car manufacturers that actually had a stockpile of components. They realized that supply chain disruptions are too damaging not to have a cache.
Saying "blame Toyota" is funny, but really the execs who saw a cost savings and took it without considering supply chain risk are the ones actually to blame.
Exactly. I was trying to be funny, not literally blaming Toyota. Though it is their fault in some sense: the popularization of "lean", "just in time" manufacturing as a magic bullet that gives you everything while saving costs.
That's just not true. Most people are happy to have Netflix and other streaming apps integrated directly in their TV without having to acquire an external device.
If it were modular, it would be more manageable. Put the "smarts" in a form factor reminiscent of a Raspberry Pi Compute Module, and put it behind a door like a decent laptop's RAM sockets.
This also offers more product flexibility from the same basic chassis. You can offer a "dumb" module for security-centric "cannot have unapproved devices on the network" markets, or a module that has specialized software or connectivity (i. e. a built-in KVM-over-IP client, or something bidirectional for hotels with internal cable and pay-per-view technologies)
Aside from the consumer benefit of "You can just snap in a new $100 module in 3 years and it will be snappy and up to date instead of scrapping a $600 set", this could be a good way to manage the ATSC 3 rollout. You can sell a 4K set today without an ATSC3 tuner, then the customer can add it as an "updated smart module" for $100 later once the format is live in their market.
This sounds exactly like how TVs worked before they became "smart", except instead of some module we had a standardized plug, and you could attach whatever you wanted to the TV.
In my case, I have an HD TV from 2006. It ain't smart, but it does tune to whatever port I want by default. I've got a Roku on there now, but in the past I've had cable boxes, consoles, even some rabbit ears.
Isn't this basically a digibox (set-top box?) though? The modern ones basically just run android and use AndroidTV. Costs maybe $50 and connects to a TV. Presumably you could even use some other android device to do the same thing.
I figured the selling point is that it's sleek and fits inside the footprint of the set, rather than having to be cabled externally or dangle off a HDMI port like a Chromecast or Roku.
Netflix is cool, but stuff like SambaTV and showing ads right in the main menu are reasons to keep the smart TV away from both Internet and my home network. Chromecasts and Apple TVs are at least a bit less obvious about it.
I thought I would hate them, but I accepted my fate and got one last year after our not-so-smart TV died after 18 years. I've had no bad experiences so far: a very smooth ride.
Yes, that's mostly why I did not want one. But unless we go open everywhere, it gives us all a worse experience and one more anxiety to think about. First of all we need phones to change, as those are, in my opinion, worse than TVs. But I agree: why not a smart TV running fully FOSS Linux? With the browser and VLC and such, it would be quite perfect for most people if packaged right.
I think it's in the middle. Even my wife isn't a fan of built in apps compared to a discrete external device.
I'd like to see more power-efficient dumb TVs. My favorite TV is a 65" LED TV that sips power. At the same time, I've read that manufacturers providing data on their customers helps subsidize lower TV prices, and in that case adding apps likely doesn't add much cost.
Pretty much every TV sold in the last 10 years has some junky Netflix app. Yet we still see a large market for Rokus, Fire Sticks, Apple TVs, and even game systems relegated to streaming-service use, all because smart TVs have been so awful. That market could evaporate overnight if TV manufacturers suddenly gave a crap.
For a car, my guess is it's impossible to comply with modern emissions and safety regulations without ICs, not to mention the redesigning and retooling that would be required. I would suspect that these regulations have also evolved symbiotically with the industry to become a moat for them, so they would have no interest in pushing to allow less complex cars.
For TVs and for cars, there are big revenue streams associated with things chips do, like serving ads and tracking what you do with your car, as well as making it impossible to maintain without a dealer. Companies would be very reluctant to shut this down.
None of this helps consumers, but I think that's where we are. Time to look at getting an old motorcycle, maybe.
I'd love a dumb EV. You need controllers for charging lithium-ion batteries and for brushless or three-phase AC motors, but even those could be ASICs or microcontrollers disconnected from the rest of the system. Switches, no touchscreens.
The shortage also applies to low tech complements such as resistors.
Probably a healthy ability to reuse, repurpose, repair and recycle can be quite effective against the inability to just buy a new thing (regardless of whether the new thing is hi or lo tech)
I'm looking at supplier websites, and trivial parts like 10-cent microcontrollers have over a year of wait time. This is why I have been waiting for months for the dryer I ordered, with still no delivery date set.
I would have no issues going out and buying the latest macbook right now but basic other products are missing.
The simple things with chips in them don't (or shouldn't) have the expected growth of computing power that makes parts obsolete.
Like, if a washing machine or a car's fuel injector needs some chips, we should be able to reuse chips from older washing machines and older fuel injectors, which we generally throw out because of a mechanical failure with all their electronics intact. But currently it seems that all these chips just get discarded; no one cares about them.
It would seem exceedingly difficult to reuse chips at scale. The software and PCB are designed around one particular microcontroller, and it is not trivial to swap it out for another.
So if I design my washing machine around a particular chip, I could order a million of them for 5 cents each, or I would have to:
* somehow identify other models which have the chip
* find 1 million of them about to be discarded
* rip them open, desolder, collect these parts
* discard the rest of the machine
* do this for about 10 cents per washing machine
Combine that with the fact that washing machines usually last a decade, so they die out over a period spread across 10-20 years after manufacture. How on earth would you pull that off? And by then, these chips will be thoroughly obsolete, with no way to order new ones.
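To put rough numbers on that conclusion (every figure below is invented, but the orders of magnitude are the whole problem: a harvested part has to beat a five-cent new part):

    new_chip = 0.05    # $ per chip, new, ordered by the million
    teardown = 2.00    # $ of labor to locate and desolder one chip
    testing  = 0.50    # $ to verify a desoldered part still works
    survival = 0.60    # fraction that survive desoldering

    reclaimed = (teardown + testing) / survival
    print(f"new: ${new_chip:.2f}   reclaimed: ${reclaimed:.2f}")
    # new: $0.05   reclaimed: $4.17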
Well, you're assuming that the manufacturers would get involved with specific disassemblies. The realistic process would be that the current mass recycling of random home appliances (which is mandatory at least where I live), in which appliances are ripped open to recycle e.g. metals, would also include desoldering the electronic parts, sorting and testing them, and putting them back into circulation. We don't do that, probably because it's far too expensive (disassembly is less automated than assembly), but a genuine shortage of core components (a set which would not and should not change over 10-20 years) could be helped this way.
A big assumption I'm making is that you could extract the exact same components from many different devices, so you could build a washing machine with a component supply that's part new and part "reused" from all kinds of older washing machines, fridges, cars, or smart toasters. This IMHO is true for generic electronic components (which are quite cheap but still add up in volume) but might not hold for chips. However, my initial argument was really that if chips for not-so-powerful appliances currently are structured to not be interchangeable, then perhaps they should be.
The cost and difficulty of this task is monumental and the rewards are small. These chips literally cost cents, and outside of global disasters they are quite easy to buy in the millions. And it isn't just microcontrollers that are out of stock; basic passive components are missing as well. It would cost an absolute fortune to strip all of these parts off older machines. No one wants to pay $6000 for a washing machine made of scrap parts which may fail and, being long since obsolete, can no longer be reordered.
I’m a proponent of that.
Just rewriting old Java or Python monsters in an efficient language like Rust would easily give us an order of magnitude better efficiency.
A special class of theorem provers could be developed, proving that a program runs below a certain level of spacetime complexity.
This would entail a great increase in energy efficiency.
I also endorse holy information warfare against inefficient proof of work cryptocurrencies and a transition to efficient proof of stake.
I like Rust, but gosh it produces the second-biggest bloated binaries I've ever seen. (Yes, it's mostly people using it wrong, though apparently I'm one of them.) The only thing worse is C++. (Again, probably people using that wrong, but that doesn't mean it doesn't happen.) Java and Python, by comparison, are tolerable, even when people use them wrong; when Java programs are huge, that's usually because of a mass of hideous “business logic” rather than a billion dependencies.
It's not releasing that's the problem; it's developing. Development is much harder with the release profile, because stuff like overflow checks are disabled.
But yes, I have tried my own custom debug profile that turns on the optimisations to try to get the size down. The final binary is smaller, but `cargo build` still regularly leaves me with just kilobytes of space remaining, and then fails outright until I `cargo clean` and try again (which I think is build script related).
The JVM is 114MiB on my machine. A near-minimal ggez program in debug mode is about 100MiB,¹ and ggez is small for a Rust application library. When you start getting into the 300s of dependencies (i.e. every time I've ever got beyond a trivial desktop application), you're lucky if your release build is less than 100MiB.
Sure, I could probably halve that by forking every dependency so they aren't duplicating versions, but that's a lot of work. (It's a shame Rust doesn't let you do conditional compilation based on dependency versions, or this would be a lot easier. As it is, we have to resort to the Semver trick: https://github.com/dtolnay/semver-trick/ — not that many people do that, so it's functionally useless.)
Take GanttProject as an example. It's 20.6MiB of files, plus the JVM. I challenge any of you to make a Rust version (with accessibility support in the GUI) that can open (something resembling) its XML files and draw some (vague graphical Proof of Concept) representation on the screen (with editable text fields), in less than 114+21=135 MiB of binary. And then tell me how, because I've been trying to do that kind of thing for over a year.
¹: I can get it down to around 8MiB with release mode, lto etc., but that significantly increases the build time and only about halves the weight of the intermediate build files.
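One partial mitigation, for what it's worth: since custom profiles stabilized in Rust 1.57, you can define an optimized profile that keeps the checks debug builds normally give you, instead of juggling the stock dev/release pair. A sketch (the profile name is my own invention):

    # Cargo.toml -- a hypothetical "checked but optimized" profile
    [profile.dev-opt]
    inherits = "release"
    opt-level = "z"          # optimize for size
    lto = "thin"
    overflow-checks = true   # keep the checks release mode drops
    debug-assertions = true

Build with `cargo build --profile dev-opt`. It won't fix the intermediate-artifact bloat, but it avoids choosing between overflow checks and optimization.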
I've never managed to get a Go program to compile, so I couldn't tell you. I was referring to C++ – though in fairness to the compiler, I had to brute-force myself through that source code too.
For binary size you could use my hypothetical theorem prover ensuring a binary size below a certain point.
That would come at some kind of compile-time or efficiency cost, since you couldn't optimise for those anymore, but I'm sure that's something one could opt into.
I don’t think binary size matters though.
Storage is very cheap these days.
People keep saying that. I have 1.8 GiB available for all my build files, and that's only because I keep deleting the build files of my other projects (meaning they have to be recompiled whenever I go back to them). Storage matters for me.
At least the prices are almost normal again, now that Chia's over.
Couldn’t you get 4x that much from a $3 usb stick?
You could also have self-modifying code such that the size of the binary automatically changes as needed.
If you ship a Lisp interpreter instead of Rust, you can have the interpreter recode itself and any Lisp files to a smaller size.
You'd just implement a compression algorithm that preserves the functionality of the compressed code.
I think you could do that with Rust too, with a self-compiling binary. It'll require some real technical skill, but if you hire a real hacker you can pull it off.
My cousin's solution for a competitive programming contest had something like that.
Do people still know about UPX these days? However, I think executable compression is beside the point of the OP. Around 18 years ago, I was going through the codebase of a program I maintained as a Debian package, and as a udeb for the installer. Back then, I was trying to make it small enough to fit on the first floppy. I learnt about unnecessarily large datatypes in structs, packing, padding, and alignment, and that adding "static" to module-local functions and data can really do things to the binary size. Those times are over. Nobody cares about binary sizes anymore; the main argument against it is "we can't be bothered, we need to innovate."
> It’ll require some real technical skill but if you hire a real hacker you can pull it off.
I do have a project a bit like that, but I'm not using Rust for it. I was trying to make my own language (like Rust, but more powerful and also smaller), but I'm probably just going to use a modified (safer) C.
If I were to write it in Rust, I'd have to compile it in the first place… and if I could do that easily, I would simply use Rust.
Ooops. 48GB here, regretting that I didn't get 64GB. (My excuse is that I needed it for osm2pgrouting which ate up to 90GB of paged memory on a country-sized input file. That hurt a lot.)
I have never been gladder that I abandoned my OSM data-processing project before I got that far. (I was planning on processing multiple country-sized input files for their road networks – on a friend's computer, but it only had 16GB, so that would be a lot of paging.)
I then jumped ship to OSRM, which did the same job in ~3GB of memory, AND several times faster, AND didn't return spurious routing results afterwards. (I was getting some routes with "hyperspace jumps" between points in the road network that weren't connected, and due to previous issues with memory I just decided to stop trying, so I didn't investigate the cause of those jumps.) So if you'll ever decide to do that again, just use OSRM. It "just works" (at least in my experience).
> I’m a proponent of that. Just rewriting old java or python monsters in an efficient language like Rust would easily give us an order of magnitude better efficiency.
Do not rewrite software in Rust, because Rust is not an efficient language. It's a memory and RAM hog, and it's unstable, with major breaking changes every release. Switching to Rust is not a thing for a profit-seeking enterprise.
In practice, it's an n-fold downgrade from good C on performance.
Source? I was under the impression that Rust took an aggressive approach to backwards compatibility.
And I'm also not a proponent of rewriting in Rust for its own sake, but the other commenter was suggesting it in place of Python, which would probably be a huge net gain in efficiency.
Possibly with the caveat that the unstable versions of rust are going to be more prone to breakage (I mean, obviously, but it is a way to hit issues). It was my impression that most people didn't need unstable anymore, though.
Anecdotally, I find that parts that used to be in stock are simply bought up en masse by speculators, and then are resold at a huge markup. So you will routinely find that all the authorized distributors are zeroed out and all the unauthorized distributors are selling for 2x-100x the price.
I'm not sure when this became a thing, but it's very real to me. It's a snowball effect; these folks get better and better at buying everything up, and then get all the capital to continue and expand the process. I have stock notifications everywhere and everything is gone as soon as it is posted.
Regarding the article: it is definitely not all about fancy fabs that can't be built; these are analog and power parts! And microcontroller fabs are fancier, but usually many, many nodes behind the state of the art. Maybe it is a capital thing, but every fab I've ever encountered is more than busy. Sure, you lose money trying to do the hardest thing and failing, but the problem is so widespread across so many non-leading-edge products that it must be explained by other factors.
It's consolidation of manufacturing. Many products and product lines have exactly 1 manufacturer today. No competition. No reason not to just run existing fabs, forego investing in more capacity and make bank while prices skyrocket.
Huh, this made me wonder: is there a reverse-auction marketplace of sorts, where a buyer places an order and sellers each provide a small or big batch, or fill the order completely?
Seems like it would be a great addition to any existing marketplace.
Not only that, but most of those microcontrollers are single-source, which amplifies any supply chain issues because you have to wait for your specific MCU to become available; there are no 'drop in' replacements any longer.
Sometimes you can get an MCU with the same pinout but missing certain features. That can be a way out if your product does not need those features. Unfortunately, I find any possible replacements are gone too...
You'd think that would be a positive? Presumably only a very tiny fraction of those are suitable for automotive applications, and presumably some of them were only made in small quantities while trying to hook some big customer.
I think we've always had a pretty large variety of microcontrollers, just something like an 8051 was available in hundreds of options from various companies. Microchip has always had lots of different ones with slightly different options. Sometimes (often?) those are the same die in a different package with some different build time options enabled.
I'm not following how that would be a positive (in a chip shortage situation).
Say you use 50 microcontrollers in a car, probably across 15+ separate part numbers. One or two of those can't be acquired, so boards need to be redesigned and the production line is held up.
Now, imagine a different world with wide cross-licensing between manufacturers: instead of hundreds of separate variants, each slightly cost-optimized for a different purpose, for every major microcontroller design, we'd have a small number of kitchen-sink variants.
If you're one car company building one car design, you try to minimize the number of different components in your design. However, different car companies might (and do) choose different MCUs based on their architecture requirements. Some car vendors use CAN bus, so they pick a microcontroller with CAN. Some use Ethernet. They have tools and software that are generally built for certain chip families. An ECU is not going to have the same CPU as the radio or the entertainment system; it just doesn't make sense.
The diversity should mean that when Audi picks some specific microcontroller for their ECU, there aren't too many others using the same part at volume. So this should isolate them to some degree from fluctuations in demand. If there's only one vendor/part, like we're seeing with GPUs, then nobody can get any, period. Even if Audi can't get their CPU for some reason, GM, who uses a different CPU, might be able to get theirs. Well, unless it's the same factory and the fabrication is the bottleneck...
It's a trade-off though, since every one of those controllers will probably be less efficient and more power-hungry than more specific parts would have been.
I agree that it's a tradeoff, but not in power efficiency. It's easy to shut off the parts that are not being used.
It's primarily a tradeoff between (a) increased chip size, (b) number of external chip connections, (c) cost, and (d) the probability that having a small number of footprints/designs will actually help.
So, in short: cost vs. the winnings from having a lean component library.
You would have thought that the whole point of microcontrollers was to decrease the number of different chips that have to be manufactured on the basis of them being programmable.
I've used microcontrollers extensively so I'm not just randomly rambling ;)
Here's examples of differences:
- Power consumption (some applications care less, some care more).
- Number of pins. Sometimes you just need one input and one output, shoving a high pin count device in there doesn't make sense.
- Pin functions. Higher current capability or other specialized functions like A/Ds D/As etc.
- On-board peripherals. Some microcontrollers have on-board motor controllers, networking, specialized counter hardware, real time clocks, DSP blocks etc.
- Integrated memory and types of memory.
- Oscillator options/frequencies.
Add to that all the different CPU architectures and vendors.
It's more like the microcontroller replaces a bunch of chips or custom chips in certain applications; it's not like you can build one microcontroller to rule them all. These are already generally mass-produced devices, so there's not a lot of extra economy of scale to capture: the one micro to do everything (if that were even possible, which it is not) would just be more expensive and less optimized.
I do wonder how much of this is exacerbated by hoarding. Similar to the Great Toilet Paper Shortage, where fear of continued shortages caused panic buying which caused extended shortfall of supply. And that was in an industry where production was not constrained and warehouses were full!
If we can find availability of any of the dozens of parts we are short on, the CEO's orders are to buy out the inventory.
Hoarding is a component of the shortage, and may be both the trigger and the cause of the problem stretching from months into years. Semi companies are buying new equipment at record levels, but even the equipment makers are thrashing to find chips to control their own equipment.
Analysts may want to keep an eye on balance-sheet inventories at electronics manufacturers like Jabil, Flextronics, Fabrinet and their ilk to get an idea of the extent of hoarding in the market.
I have to imagine there is quite a lot of hoarding going on, but by a different name. Normally I buy components for the next 6 months of expected use. TI sent me an email a couple weeks ago asking me to place orders for the next 18 months of use to guarantee delivery. So there I go placing orders for 3x what I normally buy. But it isn't hoarding, it is guaranteeing allotment.
The parts brokers locked in their quantities months ago, further increasing demand. What are you going to do?
Any chip that is unavailable from legitimate sources can be bought in China in pretty much any quantity. I wonder at this point if this isn't part of hybrid warfare.
If you check the inventory of a company like TI, it seems as if their manufacturing went offline overnight. You cannot find TI parts on Digikey anymore. Same for ST. This is unthinkable. Only some companies that are a bit more specialized and upmarket, like Analog Devices, still have some stuff left.
I wonder what the long-lasting effect will be for these companies when entire products get redesigned around what's available - many of these product designs live for 5-10 years. How many companies had to get TI parts out of their products? How is that going to affect TI revenue in a few years' time?
What worries me is that a shortage economy could become the new normal. If you're selling 100% of production and can't fill all the orders, profits are great. What kept companies from running in that mode was fear of competition. If the "free market" no longer generates new entrants, why expand and risk overcapacity and losses?
Basic supply and demand economics. Low supply means high prices, but it also means low unit sales, and that is rarely the most profitable state. High prices attract capacity, so supply will recover, and once it does, prices will come down.
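As a toy illustration of why selling out at scarcity prices is rarely the profit maximum, take a linear demand curve; a quick sketch in C (all numbers are illustrative, not market data):

    #include <stdio.h>

    /* Toy linear demand: the market price falls as quantity supplied
     * rises, p(q) = A - B*q. Every number here is made up. */
    #define A 100.0        /* price when almost nothing is supplied */
    #define B 0.5          /* price drop per additional unit */
    #define UNIT_COST 10.0 /* marginal cost per unit */

    int main(void)
    {
        for (int q = 20; q <= 180; q += 40) {
            double price  = A - B * q;
            double profit = (price - UNIT_COST) * q;
            printf("q=%3d  price=%5.1f  profit=%7.1f\n", q, price, profit);
        }
        /* Profit peaks at an interior quantity (analytically q = 90),
         * not at the scarcity end: constrained supply means high prices
         * but low volume, and total profit suffers. */
        return 0;
    }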
Hey, this "supply and demand" curve picture on Wikipedia is tiny, anyone got a freely licensed higher resolution one they can upload?
Free money throws a wrench in that, because you'll have parties willing to spend drastically more than a product is worth (due to some combination of free government money, lack of common sense, or obscene wealth), giving suppliers an opportunity to raise prices well beyond what they normally could and make up for the margin lost to low unit sales.
It's not only chips. It's everything. Timber, iron, steel, building materials, certain types of food.
It's the increased consumption of these goods because of insane lockdowns all around the world. People did not spend money on holidays but decided to upgrade their homes and electronics.
I am honestly tired of this. Yes, we had a pandemic, but we didn't have a war that killed people and destroyed the very things that produce what we are short of right now.
Or is it the ongoing & economically unjustified printing of multiple trillions of USD since 2020, which forces firms globally to move away from money or get ripped off?
Why were lockdowns insane for a novel, highly transmissible, deadly, and mutating virus for which there was no vaccine? And although there is one now, it has not been rolled out widely enough globally yet.
Many consumers fuel this demand out of sheer convenience. I personally know many PC users who order a new laptop/machine when all they require is an SSD to replace their hard drive. Unfortunately, they are not tech-savvy enough to reinstall the OS. For them it is simply more convenient to order a newer machine. In some extreme cases, all they need is an external drive to store their data and a physical wipe-down of their computer to remove the dirt/crumbs/grime from two years of usage.
> Are there certain chips that are plentiful that we could port a lot of software over to?
I sense a potential misunderstanding here: not every chip runs code. Microcontrollers and processors do, but there are many different kinds of chips. Some convert voltages, some massage signals, some drive motors, etc.
Why is this important? Porting code will only help you when you are swapping microcontrollers or processors, which is just a tiny sliver of all the ICs in use. Porting code won't get you anywhere if what you are missing from your BOM is a buck-boost IC or an op-amp.
I would guess that the GP means designing such that your board accepts multiple footprints for compatible parts, so that if your first choice of op amp (or voltage regulator, etc) goes out of stock you can populate another one.
That might be a similar part by another company, or it might be an identical part in a different package (often ICs are available in two or three package shapes, and you might not be able to predict which ones will be in stock).
It basically means ordering sufficient parts before starting layout, or even before finalizing the schematic. It's not just microcontrollers that are scarce; it's also "dumb" ICs like power management, discrete logic, A/D interfaces, etc.
Wasn't Wozniak an absolute wizard at minimizing chip count on circuit boards? What used to be a lost art is turning into one of the hottest skills around.
If you make a product with low margins it is much better for you to under-produce rather than over-produce.
I suspect this makes it somewhat hard to restart the economy as there isn't any surplus of a lot of materials. And since downstream consumers are blocked, they fail to generate the demand needed to justify bringing up production.
I find it fascinating that, through all this, traditional media channels rarely if ever mention that a large part of the reason for all of this might be our over-reliance on China and other countries for manufacturing.
How beautiful would it be if chips, and many other things, were built here in the US? In addition to the myriad obvious benefits, another would be not having to wait the month or more it takes a chip to make the trip from overseas after it "rolls off the line."
Sure, most news articles on this issue mention offshore production in passing - it's impossible not to. But very few, if any, actually make that the focal point of the article.
With enough time, I assume they will. People get upset if they suddenly see stores selling the same product for double the price. So the stores take the loss (or really just forgo the potential profit) and let scalpers resell at market value, offloading all of the hate onto faceless eBay sellers.
Then they do what they always do and pass the bill on to the consumer. It's 2021; what are you going to do, not buy a computer and a cell phone? Technology like this is a utility now.
I think most phone purchases might not be people buying a phone because they need one, but people buying a newer phone because it's cool. And while people certainly pour crazy amounts of money into those gadgets, I'm sure there is a price point at which they stop buying new ones.
Ask any of your peers why they upgrade phones. 9 out of 10 will tell you their old one was too old and slow to run modern software, or the battery was toast, and that happens within just a few years for some phones. My roommate will probably be upgrading his iPhone X soon, not because he wants a new iPhone, but because it's starting to do weird things like failing to take screenshots, and it's overall far slower than when he first got the device. Same story with the 6s the X replaced years ago. New phones haven't been cool for a long time; we've been buying the exact same slab of glass for like a decade.
I’ve always wondered, why are silicon wafers a circle when you’re stamping out rectangles from it? It seems so wasteful. I’m sure there’s got to be a reason…
The silicon is grown in a process that produces long cylinders, which are then sliced into wafers. Silicon is not molded, cast, or extruded, so the shape is a result of the growing process.
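And the edge waste is smaller than it looks, at least for small dice. A common first-order estimate subtracts an edge-loss term from the wafer-to-die area ratio; a quick sketch in C (the formula is the standard approximation, the example numbers are made up):

    #include <math.h>
    #include <stdio.h>

    #define PI 3.14159265358979

    /* First-order dies-per-wafer estimate:
     *   DPW = PI*(d/2)^2 / S  -  PI*d / sqrt(2*S)
     * where d is the wafer diameter (mm) and S the die area (mm^2);
     * the second term approximates partial dice lost at the round edge. */
    static double dies_per_wafer(double d_mm, double die_area_mm2)
    {
        return PI * (d_mm / 2.0) * (d_mm / 2.0) / die_area_mm2
             - PI * d_mm / sqrt(2.0 * die_area_mm2);
    }

    int main(void)
    {
        /* Illustrative: a 25 mm^2 microcontroller die on a 300 mm wafer.
         * The edge term costs only about 5% of the gross die count. */
        printf("~%.0f dice per wafer\n", dies_per_wafer(300.0, 25.0));
        return 0;
    }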
Okay, so you need a $20 billion factory to put out 256-core behemoths that power servers with terabytes of RAM.
But if you're GM, you don't need the latest Xeon. You need a 100 MHz single-core microcontroller, that normally sells for $2 to a hobbyist ordering a single chip, but probably something like $0.50 if you're GM and you order a million.
GM would probably happily pay $20 for that microcontroller if it's holding up a $40,000 truck.
If those microcontrollers cost 0.1% of what a Xeon costs, shouldn't that mean, to a first order of magnitude, the factory that produces a 100 MHz microcontroller should cost 0.1% of what the factory costs to produce a Xeon?
How slow / expensive would it be if you wanted to set up a factory to pump out microcontrollers for GM trucks instead of Xeons for Amazon servers?
Those old and cheap chips seem to be hit the hardest. I can easily buy an M1 MacBook today, but washing machines and basic microcontrollers are sold out for an indeterminate time.
On a related note, ETH 2.0 is just around the corner, which is going to result in a flood of used GPUs hitting the market (hopefully driving prices down). I believe this will happen sometime in the next two years.
This explains the shortage of very fast chips, but why do cars etc. have a chip shortage? Can't they use slower chips with fewer transistors that can hopefully be manufactured more easily by more companies?
Cars already use larger, slower lithography nodes, which are more resilient to temperature swings but can also carry lower profit margins, since the dice take up more space on a standard silicon wafer. Older fabs, when spun down, don't just sit there; their hardware may be cannibalized, or at least the floor space repurposed for something profitable. So the old machines usually can't just be turned back on. There's also a people problem: that specific equipment may be understood by a select group of people, and they may not be available.
There are also problems surrounding the qualification of parts and the supply-chain restrictions that suppliers are held to by manufacturers. So trying to replace even just one part in a design that is locked down can be quite painful for a company. And we are having shortages of many types of parts at this time, which exacerbates that experience.
Even so, a much "slower part" usually won't have the same feature set and may not have the same memory peripherals pinned out, requiring further qualification. Hardware changes at that level are usually done 2-3 years in advance to allow for all of the paperwork and qualification steps. You'd have to build today's cars with standardized, qualified parts from years ago. An outrageous example would be trying to stick a cassette-tape head unit in your latest 2022 car.
We've been unable to get USRPs -- and we need a few dozen for a project. We're starting to look at Chinese "clone" versions to see if they're any good. National Instruments has no idea when they can ship again, and no supplier has any.
Perfect timing, for once. In 2019, I jumped onto the Sonos train, got rid of a bunch of old hardware, and (finally) replaced my desktop with a ThinkPad X1. In early 2020 (before the first lockdown) I completely redid my little "home office", investing in 4 combinable tables, a new 8-channel USB sound card, a MIDI controller, and some more stuff I should always have had... Two weeks after I managed to clear the old tables and stuff out of the apartment, lockdown came. In summer 2020, I finally got the long-overdue replacement for one of our HPC clusters at work.

Neither at work nor at home do I plan to invest any money in hardware in the next two years. Sure, it might sound easy to say that from where I am, but I find this a perfect time to do some consumption consolidation. Looking back, I surely spent a bit too much on hardware over the last 20 years. That's fine in the first few years of your career; after all, those are not just toys, they actually help you acquire knowledge. However, after a while, it somehow becomes silly. If I am honest with myself, there is really nothing I need right now. And I guess that applies to more people than just me.
"factories are more advanced and cost over $20 billion each."
"Once you spend all that money building giant facilities, they become obsolete in five years or less. To avoid losing money, chipmakers must generate $3 billion in profit from each plan"
I implore the national security state and the intelligence community to take action and intensify this trend.
Thus the advance of artificial intelligence will be slowed and we will have more time to practice AI regulation and control.
Only then may strong AI be permitted to eventuate.
Otherwise it may destroy us.
He who develops AI would be punished severely were we to live in a just world.
We need a Butlerian Jihad against AI, led by the national security state and equipped by the brave men and women of Anduril Incorporated.
The damage to the rest of the economy is unfortunate but a cost I am willing to accept.
Now what do we actually need that hardware for, aside from training language models or mining bitcoin?
Why don’t we also develop more efficient software?
Why don’t we pay people to do so?
Many of you in this thread could work on this, and be paid handsomely for it.
Per the orthogonality thesis (Bostrom), I do not even expect AI to have moral values similar to ours.
I expect it to instrumentally secure power to better pursue its goals, and in the process establish hegemony over us and over other AIs.
From there it could easily try to kill us. I don’t want any of that.
My preferred outcome seems to be what Elon Musk wants: human cyborgs amplified with brain-computer interfaces and control over machines, never a machine with control over many humans or posthumans.
The main threat model is a Yudkowskian rapid takeoff where artificial general intelligence recursively self improves far beyond human intelligence, making it like a god with absolute power compared to us. This corresponds to the agentic model of artificial general intelligence put forth by Bostrom.
Yudkowskian AGI might fail in the long term, since it would run into Gödelian problems that it could never solve but would loop on infinitely.
That humans are immune to those and can behold contradictions testifies to a Penrosian, supra-Turing-machinic aspect of the human mind.
Man has a transcendent aspect even if we may not be meta-cognitively aware of it at all times.
AGIs could possibly be destroyed through the use of such problems, akin to magical spells wielded by powerful magi.
A secondary threat model of mine is based on a world of non-general, non-agentic strong AI with many discrete “tool AIs” with superhuman ability in one unique area.
The secondary model is mostly harmless except for military AI. The genius of Anduril Incorporated lies in their creation of drones designed to destroy other drones.
This future is still very dangerous, and I hope that the security state will take action to prevent its arrival.