This comment is so interesting because it's clearly the unvisited color that's broken. You know this, you even typed it out, but it is so common for visited links to be darker than unvisited links that it makes you assume the opposite.
I'm hoping they mean the prerelease implementation was only creating leaks due to bugs that have been fixed, so a machine that runs the release implementation for the same amount of time wouldn't see such behavior.
This is also important for being able to show normal size text on smaller phones. I've got a 5.8" screen and basically every app is visually broken, with about 10% functionally broken as well. Every web or app designer should get an iPhone Mini or similar, crank the font size accessibility setting, and make sure everything works. In particular, any text that is truncated needs to have a line-wrapped version available somewhere, every page with content needs to be scrollable, and the input box needs to be functional (e.g. it must show at least one line) when the keyboard is out.
On web, use `overflow-wrap: break-word` and make sure your header can shrink.
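A minimal sketch of both, with made-up class names:

```css
/* Long unbroken tokens (URLs, usernames) wrap instead of
   overflowing the viewport. */
.post-body {
  overflow-wrap: break-word;
}

/* A flexbox header: min-width: 0 lets the title shrink below its
   intrinsic width instead of forcing horizontal scroll. */
.site-header {
  display: flex;
  align-items: center;
  gap: 0.5rem;
}
.site-header .title {
  flex: 1 1 auto;
  min-width: 0; /* flex items otherwise refuse to shrink past their content size */
  overflow-wrap: break-word;
}
```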
Just noticed that yesterday. It almost made me give up on logging in and posting. The day Old Reddit goes away will probably be the day I can't tolerate the site any longer.
I already avoid Reddit search results in Google because the site is crap, unless there's no other option (and then I force it to Old Reddit), just like I avoid the sites full of popup ads.
Without users writing new content (because the site is crap) the Google stream will dry up too.
Bots will generate content on Reddit using LLMs for years to come. It will be an uncanny valley full of apparent UGC and interactions. People who still use search will be pointed there and say, see, you don't need to use these new-fangled chatbots! But they'll be looking at pre-generated chatbot output. The bots will get stuck in a reinforcement loop where Reddit content generated by bots is used to train the LLMs for the next generation of bots, and "reddit recommendations" will devolve into old wives' tales, stuck in the past, the echo of the voice of a generation long gone, repeated by mindless zombies.
Their latest update on my iPhone 15 Pro (so the most vanilla configuration) has some kind of reverse padding on posts that will drag them under the previous one.
Sure they do. You're just not the target audience. You're stuck in the A/B testing phase where the A side gets Advertisements and the B side gets the B-rated website.
If you are using Reddit's web interface: this is deliberate. They seem to make an effort to make it so annoying that people might be nagged into installing the app. I assume it works on enough people that it is worth it to them to lose people like me who walk away without installing the app.
As someone who has worked in compliance testing for tightly controlled software platforms, things like this piss me off. These problems have known solutions.
I still use the Galaxy S8 with a 5.8" screen as well. It is actually quite amusing that nowadays this is considered a "small phone". I find it to be the perfect size and refuse to carry a brick around with me. Good thing in my country we still have 2G for when this breaks :P
In medium to big enterprises you usually ask someone to give you the resolutions that you're supposed to support.
These resolutions are usually significantly higher than the iPhone Mini's. Usually the product owner or UX/design makes that decision, because you need to make a cut somewhere and almost nobody uses small phones anymore. So they're making the judgement call that these people aren't worth the cost of creating the design, testing that every flow works correctly, etc.
It's definitely annoying to be outside the target demographic, though; I know the feeling well.
- Their intent isn't small resolution, they're discussing increasing font size beyond the default on a standard premium smart phone[1]
- Retina displays came out when I was still in college...2010? Past that point, raw resolution is meaningless; things like dp (Android parlance), pt (iOS parlance), points (Adobe/font parlance), or rem (web) are what you have to hang your hat on
- if you're making "someone else" (?) tell you what resolutions to support, are they technical enough to understand that?
- The invocation of "medium to big enterprise" is carrying a lot of weight; the big enterprise I've worked at certainly didn't do this, but then it was Google
I think this is something more depressing that I saw constantly through the eyes of someone who started at SmallCo then went to Google: designers didn't know enough about view layout to explain this, engineers didn't care enough to explain because it was a "design thing", and if you were an engineer who cared enough, you were seen as troublesome / sticking your nose in the wrong place by your fellow engineers.
This isn't an idle observation: by sticking my nose in the wrong place continually, I learned enough about design to make a new dynamic design system that no one cared about until VPs needed one, and then it got in on the branding for Material You/Material 3.
[1] The iPhone Mini is 5.4"; the post you're replying to is recommending 5.8", which is a pretty de rigueur smartphone screen, even for premium smartphones in high-income countries
I used the term resolution as a stand-in for how the question is asked, as I thought everyone here would understand it easier like that.
Ofc the person isn't asked which pixel density, pixel ratio, resolution, etc. should be supported - they're asked what the smallest device is that they should support. And that's usually the iPhone, and raising issues when a design doesn't work on an iPhone Mini gets them reprioritized into the backlog until someone closes them as won't-fix.
And yes, I'd wager these multinational giga-corporations like MAMA (Microsoft, Apple, Meta, Alphabet) are very different in culture, but I can't speak from experience; I've never applied to work for any of them.
I believe my nation classifies a small enterprise as <50 employees, with large starting at 250. That's a very different organisational structure than you get in a corporation with tens of thousands of employees, spanning multiple nations.
5.8" is "smaller" screen? So what will you tell about my 4.5" BQ Aquaris E4.5 Ubuntu Edition, which I use for daily web browsing? 5.8" is huge, it cannot fit in the pocket!
My designer will do that, and is pretty good. The problem is my product team, who have very little technical background. They look at the designs and start messing with everything. I have to constantly remind them of AA compliance. They flat out don't get it.
They also don't understand that people will visit your web site with their phone, even though we have a native app.
I am one of those users who regularly use website over installing an app, even for some things I use daily. I think things that can be a website should be a website =)
I use an iPhone SE (4.7") with slightly larger default text size (accessibility settings) and have the same experience. Everything more or less "works" though, so it's not as bad as it could be --- the visual issues basically just help me spend less time on my phone.
I don't think I've ever seen a website respect my phone's text size, and frankly I didn't know it was possible; this Airbnb blog post is cool and makes me want to update my own sites.
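For the web side, the usual trick (a sketch, not necessarily what the Airbnb post does) is to leave the root font-size at the user's preference and size everything in rem:

```css
/* 100% = whatever the user/browser/OS preference resolves to;
   setting a fixed px value here would override it. */
html {
  font-size: 100%;
}

/* rem units scale with that preference, so a cranked-up
   accessibility text size flows through the whole layout. */
body    { font-size: 1rem; line-height: 1.5; }
h1      { font-size: 2rem; }
.button { font-size: 1rem; padding: 0.5em 1em; }
```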
I remember having a heated discussion with an LLM about this. For some reason the "web" doesn't even consider respecting dpi and angular sizes and instead relies on "breakpoints", which "is important to remember to test on all sorts of devices".
That's just bs. As a maker of UI, I want to get a rectangle, put my controls on it with sizes in cm/in/° as I see fit, and then just drag the bottom-right corner to see what happens at different screen sizes. One shouldn't have to buy a foot-high stack of smartphones and tablets to test an effing label.
The whole issue stems from the fact that we can't measure things correctly, because the whole measurement system is based on ideas from the WinWord era.
> For some reason the "web" doesn't even consider respecting dpi and angular sizes
Maybe I'm not sure what you mean, but this is not correct. "The web" is definitely DPI-independent. Specifying a width of 16px will render 32 physical pixels on a @2x display.
What's stopping you from using cm/in as your unit (which is actually also what px is based off of in the web, not physical pixels)? ° doesn't make sense until you pick a viewing distance, at which point you're really back to some scaled value of cm/in.
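CSS does accept these units today; the caveat is that they're defined as fixed ratios to the CSS px (1in = 96px, 1cm = 96px/2.54), so they track the device's px anchoring rather than guaranteeing measured physical size:

```css
/* These resolve to px at fixed ratios, not to centimeters
   measured on the actual panel. */
.label {
  width: 2in;       /* = 192px */
  font-size: 0.5cm; /* ≈ 18.9px */
}
```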
Fixed size layout runs into problems where you're working with displays that aren't at typical desktop / laptop / mobile distances from the retina.
The two obvious examples which come to mind are mounted displays (from wall art and signage to billboards), and glasses-based displays.
A 1cm font size on a mobile device is ludicrously large. On outdoor signage it's invisible, and quite possibly below pixel size. It might be appropriate on a desktop display for some major title or site branding. It's going to fill the entire field of view on a face-mounted display.
The simple truth is that design has to respect not only device capabilities (your colour palette likely works poorly on monochrome e-ink devices, ask me how I know) and reader capabilities (colourblind? glaucoma? cataracts? presbyopia? macular degeneration?) but also the specific use case and environment, all in ways that the author of a site or design often has no possible insight into. Client-specified design is the only option which works in all of these instances, and yes, this means that the more complex your site / SPA (o hai ubrz) the more likely it will be to break irreparably in a large number of instances.
Not gonna rewrite or patch a whole CSS framework to make a dashboard. One can avoid using it in the first place, but then has to cope with elusive y-scroll in line inputs, misalignments, and so on. There's always something strange out of the box, even if you target a specific browser.
Which CSS framework, out of curiosity? I don't know many that won't let you use px, which is 1/96th of an inch at any DPI, and if it really doesn't let you use the most basic sizing unit on the web then the problem seems more with the chosen framework than anything to do with browsers :p.
> cm/in as your unit (which is actually also what px is based off of in the web, not physical pixels)? ° doesn't make sense until
Nope!
A CSS pixel is defined against a 96 DPI display viewed at 28 inches.
px is fundamentally angle-based. 1/47 of a degree. And sure it's a "scaled value of cm" at some point but this way the scaling is more obviously handled on a per-device basis.
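For reference, the 1/47° figure comes straight out of the spec's reference pixel (one pixel of a 96 dpi display viewed at 28 inches):

$$\theta = \arctan\!\left(\frac{1/96\,\text{in}}{28\,\text{in}}\right) \approx \frac{1}{2688}\,\text{rad} \approx 0.0213^{\circ} \approx \frac{1}{47}^{\circ}$$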
Endlessly-replagiarised blog posts about Bootstrap and friends will talk about breakpoints as though they're the greatest thing since the printing press. Outside the hypey framework world, they were only ever in vogue for long enough for us to realise that they didn't really work that well. By that time, we had flexbox, and grid followed shortly after.
Nobody worth their salt has been using breakpoints as a first resort for the past decade. There is no point arguing with a predictive text model about it.
The blog post design/airbnb site are pretty reliant on @container breakpoints. Even with flex/grid layouts you usually want to change things up significantly when it gets down to single column.
Why? Reading order is page order, so it should just work.
Though, @container breakpoints are at least justifiable. Back when people keyed everything off viewport width (the approach that I'm sure the computer was regurgitating – and the approach used by every version of Bootstrap since 2), things were very fragile.
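For anyone who hasn't used them, a minimal sketch of the difference (made-up selectors): the breakpoint keys off the container's width, so the same card collapses to one column inside a narrow sidebar even on a wide viewport.

```css
/* Opt the wrapper in as a size container. */
.card-list {
  container-type: inline-size;
}

.card {
  display: grid;
  grid-template-columns: 1fr 1fr;
  gap: 1rem;
}

/* Collapse to a single column when the *container* is narrow,
   regardless of what the viewport is doing. */
@container (max-width: 30rem) {
  .card {
    grid-template-columns: 1fr;
  }
}
```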
It's easier to see if you play with it on the actual site rather than looking at the images and trying to deduce what's different. It's not really about the order or positioning of boxes; that's an extremely easy problem to solve.
Some of it is progressively loading space-saving assets, e.g. the logo goes from "${icon} airbnb" to "${icon}" to "" depending on how much space there is, and the search bar simplifies so you have space to actually type. Some content border elements are also removed so there is more room for the actual content: rather than blindly always showing content boxes with rounded borders, spacing, and other attributes, you can gain a lot of space back on small displays by stretching the container's content to fill that part of the screen completely. This is particularly useful if you combine it with getting rid of certain UI elements, as above.
HN also does what's described at the end using media queries - if you make the page small your topbar fills the entire top and changes element layout while the post content area fills the entire rest of the screen.
And if you make your page slightly bigger than the threshold for triggering HN's "mobile view", there's still padding in the top bar but the page is 53 pixels too wide, and you have to scroll horizontally. Hacker News is an example of why we don't use media queries to hack in a responsive layout. Websites are responsive by default, unless you're using <table> layouts (as HN does) or the <div>pocalypse (as every other website seems to, these days). Building different versions of your site for different widths is not a solution.
The more subtle things you describe seem quite sensible. I'll probably steal those ideas, if it comes up.
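To be concrete about "responsive by default": roughly this is all a plain content page needs, no breakpoints involved (a sketch):

```css
/* Normal flow already reflows to any width; just cap the line
   length and keep media inside the column. */
body {
  max-width: 70ch;
  margin: 0 auto;
  padding: 1rem;
}
img, video {
  max-width: 100%;
  height: auto;
}
```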
> Building different versions of your site for different widths is not a solution.
I think about it as building different sites for different form factor. What you do on a phone is different from what you do on a desktop. Or a tablet. I don't like using the amazon website on mobile because it's so cluttered, when what I want is usually searching for a product, or checking my cart for a product. I'm not managing my account in there, nor do I want alternative recommendations.
> What you do on a phone is different from what you do on a desktop.
What if I want to do the "mobile" activities on a desktop, or zoomed in enough to trigger the viewport changes? What if I want to do the "desktop" activities on a mobile? If you want to make a separate mobile site, make a separate mobile site: don't use effective screen resolution as a proxy.
Yeah, you can definitely fuck it up, but the same can be said for flex/grid/masonry layouts yet you wouldn't say "and grid spanning errors on resize are why we never use grid". HN does a lot "wrong" in its design (like the aforementioned table hell) but that doesn't mean every mistake it makes is a universal thing to avoid and inherent to the way it was implemented. Reddit, YouTube, Facebook, Instagram, Netflix, X, Amazon, and most modern sites do the same kind of content box snapping on either the header or content area (or both) via breakpoints. Most do it a lot better than HN :). The top site I know that doesn't really do that (but it does do the minor breakpoint things) is Yahoo.com.
The explainxkcd wiki says that cats were added later, and that permalinks go to a snapshot of the machine. So it's possible all the cats are lower in the machine, or the permalink is from before cats were added. There's some cats at the bottom of the machine right now. https://xkcd.com/2916/#xt=2&yt=105&v=1402
My guess is either compression or stuff lingering in RAM. The CPU can't be smart here since it doesn't know what any of the future ifs will be. It doesn't know they're in order, or unique, or even valid instructions. You could (theoretically; the OS probably wouldn't let you) replace an if with an infinite loop while the program is running.
> Perhaps something like 'the branch for input x is somewhere around base+10X'.
That's unlikely. Branch predictors are essentially hash tables that track statistics per branch. Since every branch is unique and only evaluated once, there's no chance for the BP to learn a sophisticated pattern. One thing that could be happening here is BP aliasing. Essentially all slots of the branch predictor are filled with entries saying "not taken".
So it's likely the BP tells the speculative execution engine "never take a branch", and we're jumping as fast as possible to the end of the code. The hardware prefetcher can catch on to the streaming loads, which helps mask load latency. Per-core bandwidth usually bottlenecks around 6-20GB/s, depending on if it's a server/desktop system, DRAM latency, and microarchitecture (because that usually determines the degree of memory parallelism). So assuming most of the file is in the kernel's page cache, those numbers check out.
I doubt it, branch predictors just predict where one instruction branches to, not the result of executing many branches in a row.
Even if they could, it wouldn’t matter, as branch prediction just lets you start speculatively executing the right instruction sooner. The branches need to be fully resolved before the following instructions can actually be retired. (All instructions on x86 are retired in order).
I was expecting to find out how much data YouTube has, but that number wasn't present. I've used the stats to roughly calculate that the average video is 500 seconds long. Then using a bitrate of 400 KB/s and 13 billion videos, that gives us 2.7 exabytes.
I got 400KB/s from some FHD 24-30 fps videos I downloaded, but this is very approximate. YouTube will encode sections containing less perceptible information with less bitrate, and of course, videos come in all kinds of different resolutions and frame rates, with the distribution changing over the history of the site. If we assume every video is 4K with a bitrate of 1.5MB/s, that's 10 exabytes.
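Spelling the arithmetic out (the small gap to 2.7 EB is just rounding in the inputs):

$$13\times10^{9}\ \text{videos} \times 500\ \text{s} \times 0.4\ \text{MB/s} = 2.6\times10^{18}\ \text{B} \approx 2.6\ \text{EB}$$
$$13\times10^{9}\ \text{videos} \times 500\ \text{s} \times 1.5\ \text{MB/s} \approx 9.8\times10^{18}\ \text{B} \approx 10\ \text{EB}$$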
This estimate is low for the amount of storage YouTube needs, since it would store popular videos in multiple datacenters, in both VP9 and AV1. It's possible YouTube compresses unpopular videos or transcodes them on-demand from some other format, which would make this estimate high, but I doubt it.
That storage number is highly likely to be off by an order of magnitude.
400 KB/s, or 3.2 Mbps as we would commonly write it in video encoding, is quite low for an original-quality upload in FHD, commonly known as 1080p.
The 4K video number is just about right for average original upload.
You then have to take into account that YouTube compresses those into at least two video codecs, H.264 and VP9. Each codec gets all the resolutions from 144p up to 1080p or higher, depending on the original upload quality. Many popular videos, and 4K videos, are also encoded in AV1. Some even come in HEVC, for 360° video. Yes, you read that right: H.265 HEVC on YouTube.
And all of that doesn't even include replication or redundancy.
I would not be surprised if the total easily exceeds 100 EB, which is 100 times the size of Dropbox (as of 2020).
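As a back-of-envelope (the multipliers are my guesses, not known figures), starting from the ~10 EB original-upload estimate above:

$$10\ \text{EB} \times \underbrace{\sim 2}_{\text{H.264 + VP9 ladders}} \times \underbrace{\sim 3}_{\text{replication}} \approx 60\ \text{EB}$$

AV1/HEVC copies plus redundancy could plausibly push that past 100 EB.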
I mean, it would explain the minutes-long unskippable ads you get sometimes before a video plays. There's probably an IT maintenance guy somewhere, fetching that old video tape from cold storage and mounting it for playback.
I pine for the day when "hella-" extends the SI prefixes. Sadly, they added "ronna-" and "quetta-" in 2022. Seems like I'll have to wait quite some time.
For anyone wondering "queca" would be the normal spelling of the "profanity" although it's probably one of the milder ways to refer to "having sex". "Fuck" would be "foda" and variations. Queca is more of a funny way of saying having sex, definitely not as serious as "fuck".
Hyundai Kona, on the other hand, was way more serious, and they changed the name to another island for the Portuguese market. Kona's closest translation (actual spelling "cona") would be "cunt", in the US sense in terms of seriousness, not the lighter Australian one.
> Two of the suggestions made were brontobyte (from 'brontosaurus') and hellabyte (from 'hell of a big number'). (Indeed, the Google unit converter function was already stating '1 hellabyte = 1000 yottabytes' [6].) This introduced a new driver for extending the range of SI prefixes: ensuring unofficial names did not get adopted de facto.
On one hand: just two formats? There are more, e.g. H264. And there can be multiple resolutions. On the same hand: there might be or might have been contractual obligations to always deliver certain resolutions in certain formats.
On the other hand: there might be a lot of videos with ridiculously low view counts.
On the third hand: remember that YouTube had to come up with their own transcoding chips. As they say, it's complicated.
Source: a decade ago, I knew the answer to your question and helped the people in charge of the storage bring costs down. (I found out just the other day that one of them, R.L., died this February... RIP)
For resolutions over 1080, it's only VP9 (and I guess AV1 for some videos), at least from the user perspective. 1080 and lower have H264, though. And I don't think the resolutions below 1080 are enough to matter for the estimate. They should affect it by less than 2x.
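The "less than 2x" bound follows if you assume bitrate scales roughly with pixel count: all the rungs below 1080p together add less than one extra 1080p-sized copy.

$$\frac{720\text{p}+480\text{p}+360\text{p}+240\text{p}+144\text{p}}{1080\text{p}} \approx 0.44+0.20+0.11+0.05+0.02 \approx 0.82 < 1$$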
The many videos with low view counts are accounted for by the article. It sounds like the only ones not included are private videos, which are probably not that numerous.
I did the math on this back in 2013, based on the annual reported number of hours uploaded per minute, and came up with 375PB of content, adding 185TB/day, with a 70% annual growth rate. This does not take into account storing multiple encodes or the originals.
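For reference, the daily figure is consistent with the widely reported 2013 rate of 100 hours uploaded per minute at an average bitrate of roughly 0.36 MB/s (the bitrate is my assumption):

$$100\ \tfrac{\text{h}}{\text{min}} \times 1440\ \tfrac{\text{min}}{\text{day}} \times 3600\ \tfrac{\text{s}}{\text{h}} \approx 5.2\times10^{8}\ \tfrac{\text{s}}{\text{day}};\qquad 5.2\times10^{8}\ \text{s} \times 0.36\ \tfrac{\text{MB}}{\text{s}} \approx 187\ \text{TB}$$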
Do you know that for certain? I always suspected they would, so they could transcode to better formats in the future, but never found anything to confirm it.
On all of the videos I have uploaded to my YouTube channel, I have a "Download original" option. That stretches back a decade.
Granted, none of them are uncompressed 4K terabyte sized files. I haven't got my originals to do a bit-for-bit comparison. But judging by the filesizes and metadata, they are all the originals.