There’s also weirdness with the drivers and HDMI, I think mainly around encryption (HDCP). But if you only provide DP and include an adapter, it’s suddenly “not my problem” from Intel’s perspective.
Which was delayed two years. I’m speculating it was supposed to be mostly or exclusively that, but they needed a computer now. Or needed to spend the budget now.
There’s something especially bad about radiological burns. You don’t necessarily know severe damage is being done; there’s no feedback loop to even tell you to get away. And beyond the metaphysical and psychological aspects, to me they just look wrong.
For the purposes of this article, this graph is much more effective at making the point than the log-scale one; I think it would have been the better choice.
For what this post is communicating, I don't think the exact size of Adobe prior to 2000, or the exact size of SumatraPDF, matters at all.
The linear graph instantly communicates:
- SumatraPDF has barely changed size in the same time that Adobe's size has grown exponentially
- Adobe's crazy growth spike started ~6 years ago
Maybe I'm just dumb, but I didn't realize at first that the graph had a log y-axis. Then, once I realized that, I had to spend a bit of time parsing the graph to figure out what it was saying (I don't work with log graphs often at all). And once that was done, the only thing I came away with was "wow, Adobe grew a hell of a lot while Sumatra didn't", which is the same thing the linear graph told me instantly.
Being able to see that Sumatra's size remains relatively flat while Adobe's size growth is practically vertical is all the granularity I care about at a glance. If I want to know exact sizes, I'll dive in deeper.
I think this is an argument for the log scale. I'd argue that the things you say it communicates are not actually correct.
Adobe's size has been growing exponentially pretty much this whole time. The rate increased slightly in the mid-2010s. SumatraPDF started out that way too, but managed to level out after about a decade.
Relative size is what matters here. The increase from ~2.5MB to ~5MB in the mid-90s was pretty significant for the time. In terms of the impact on users, it was probably at least as significant as, if not more than, going from 300MB to 600MB 25-30 years later.
I disagree; I'm with qualeed on this one. I don’t think the size doubling means much at all, except to raise the question: why did it double? What was added that I care about? My instinct tells me nothing, so it shouldn’t really be acceptable, except that this is par for the course these days. Nobody cares about bandwidth; it’s just assumed to be fast and unlimited by nearly every publisher of software.
In the 90s that jump cost me in terms of modem time: I couldn’t download anything else for an extra 30-60 minutes that day (if I remember my speeds correctly). Today, an extra 300MB costs me less than a minute, and I can easily keep multitasking in the process.
Imagine there had been a 50MB jump in 1998. That would have been a major WTF moment. Now imagine a 50MB jump in 2025. We'd barely notice.
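Rough back-of-the-envelope numbers bear that out. The link speeds below are assumptions (common dial-up rates on the low end, an ordinary 100 Mbps line today), and real-world modem throughput was lower still once protocol overhead kicked in:

```python
# Back-of-the-envelope download times for the size jumps discussed above.
# Link speeds are assumptions, not figures from the article.
def minutes(size_mb: float, speed_kbps: float) -> float:
    kilobits = size_mb * 8 * 1000  # MB -> kilobits
    return kilobits / speed_kbps / 60

print(f"extra 2.5 MB at  9.6 kbps: {minutes(2.5, 9.6):5.1f} min")      # ~35 min
print(f"extra 2.5 MB at 14.4 kbps: {minutes(2.5, 14.4):5.1f} min")     # ~23 min
print(f"extra 300 MB at 100 Mbps:  {minutes(300, 100_000):5.1f} min")  # ~0.4 min
```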
Saying Adobe's crazy growth spike started six years ago is just pointing to the knee in the exponential curve. It's had pretty much the same curve since version 1.0. And SumatraPDF had the same exponential growth for quite a while.
If absolute numbers are what matters and an extra 300MB is not important, then why not scale the Y axis to 1TB and squash everything to the bottom?
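If it helps to see the two renderings side by side, here's a minimal matplotlib sketch using made-up placeholder growth curves (not the real measurements from the article): steady exponential growth reads as a late "spike" on a linear y-axis and as a roughly straight line on a log y-axis.

```python
# Minimal sketch: the same (made-up, placeholder) exponential series on a
# linear vs. a log y-axis. These are NOT the real Adobe/SumatraPDF numbers.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1993, 2026)
adobe = 2.5 * 1.18 ** (years - 1993)                    # hypothetical steady ~18%/yr growth
sumatra = np.minimum(1.0 * 1.3 ** (years - 2006), 8.0)  # hypothetical growth that levels off
sumatra = np.where(years < 2006, np.nan, sumatra)       # pretend it only starts in 2006

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))
for ax, title in ((ax_lin, "linear y-axis"), (ax_log, "log y-axis")):
    ax.plot(years, adobe, label="Adobe (placeholder)")
    ax.plot(years, sumatra, label="SumatraPDF (placeholder)")
    ax.set_title(title)
    ax.set_ylabel("installer size (MB)")
    ax.legend()
ax_log.set_yscale("log")
plt.tight_layout()
plt.show()
```

On the linear panel the "spike" shows up near the end even though the growth rate never changed; on the log panel it's just a line with a constant slope.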
Thank you! That graph is so much clearer than the one in the article. You can see at a glance the relative size of the two programs, which the logarithmic scale does a terrible job showing.
I'm part of the technical audience, and the logarithmic scale is meaningless to me, so I don't even agree that it's a good choice for a technical audience. It may be a good choice for people who are already used to reading logarithmic scales, but that can't be a particularly large group, as it's very rare to see a graph like that.
I bought a Threadripper Pro system out of desperation, trying to get secondhand 80GB PCIe A100s to run locally. The huge resizable BAR (ReBAR) allocations confused or crashed every Intel and AMD system I had access to.
I think the Xeon systems should have worked and that it was actually a motherboard BIOS issue, but I had seen a photo of it running in a Threadripper and prayed I wasn’t digging an even deeper hole.
I've been tempted, but had a hard time finding a case where I needed more than a 9950X but less than a single-socket EPYC. Especially since the EPYC motherboards are cheaper, the CPUs are cheaper, and the EPYCs have 3x the memory bandwidth of the Threadripper and 1.5x that of the Threadripper Pro (12 memory channels vs. 4 and 8, respectively).
I agree; it feels like it’s just blatantly funneling me into those dubious “buy this font” sites. I usually have somewhat better success with http://www.identifont.com/.
I don’t think the proposed font is correct either, though I’m not even sure the concept of a font applies to that example. Mainly, the arches on the “m” are wrong: too arch-like, whereas in the example they’re more teardrop-shaped.
Almost all “normal” damage is, and should be, repaired online with ZFS. Those offline repairs were a consequence of the hardware controller having no idea how to interact with the filesystem directly, which was probably for the best. Level-of-abstraction purists don’t like this aspect of ZFS.
If something particularly bad happens, or you try to be really “clever”, you can get into the rare situation where the pool won’t import, or will only import read-only. There are tools to help repair that kind of metadata damage; after that, proceed with the normal online repair if needed.
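For anyone who hasn't been through it, here's a rough sketch of that workflow, just shelling out to the standard zpool commands from Python. The pool name "tank" is a placeholder, and the recovery import (-F is one of those tools) only applies when the pool genuinely won't come up cleanly.

```python
# Rough sketch of the ZFS repair workflow described above, shelling out to
# the standard zpool CLI. "tank" is a placeholder pool name.
import subprocess

def zpool(*args):
    cmd = ["zpool", *args]
    print("$", " ".join(cmd))
    return subprocess.run(cmd)

# Normal case: repair happens online. A scrub walks every block, verifies
# checksums, and rewrites bad copies from redundancy while the pool stays in use.
zpool("scrub", "tank")
zpool("status", "tank")   # watch scrub progress and any repaired/error counts

# Rare bad case: the pool won't import normally. Try a read-only import first
# so nothing else gets written while you copy data off.
zpool("import", "-o", "readonly=on", "tank")

# Last resort: -F requests a recovery import that discards the last few
# transactions to return the pool to an importable state.
zpool("import", "-F", "tank")

# Once the pool imports read-write again, go back to the normal online repair:
zpool("scrub", "tank")
```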
But I don’t think CSS can leverage the GPU in most (any?) cases. Apple has almost certainly baked something into the silicon to help handle the UI.