> When I keep them in their native P3 color space, each image is between 50kb and 150kb. With 22 individual images, I’d be sending almost two megabytes of assets, which feels like way too much for decorative images like this!
Love to see this kind of consideration for folks. I feel like I download twice that in JavaScript blobs from like 12 analytics companies and ad videos every time I visit e.g. CNN or TechCrunch.
It's life or death for some people. I know very poor people on plans that only have a few GB of transfer, and they've burned through it within 3 days of the start of the month. Then they can't use Google Maps or email to find jobs, healthcare, food pantries, etc.
I regularly see sites with multi-hundred-MB homepages, autoplay videos, etc. on mobile, where it is harder to uBlock your way out of all the garbage.
I'm confused by this. Color space is metadata, not bit depth; it's a mapping. 8-bit P3 will be exactly the same size as 8-bit sRGB, except for the few hundred additional bytes of metadata describing the mapping (since missing metadata is assumed to be sRGB).
For example, you can "convert" an sRGB png to P3 (different colors result, but same bits) by doing nothing but tacking on the icc profile metadata from a P3 image.
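As a minimal sketch of that trick (assuming Pillow, with hypothetical filenames, and a Display P3 ICC profile previously extracted from any P3-tagged image):

    from PIL import Image

    # A Display P3 ICC profile pulled from any P3-tagged image.
    with open("display-p3.icc", "rb") as f:
        p3_profile = f.read()

    img = Image.open("photo-srgb.png")
    # Same pixel values, new interpretation: wide-gamut displays will now
    # render these as more saturated P3 colors.
    img.save("photo-p3.png", icc_profile=p3_profile)

(Strictly speaking Pillow re-encodes the PNG stream here rather than just appending the iCCP chunk, but the pixel data is identical either way.)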
I added a note about this a bit later in the post, but the issue was that all of the optimization services I use strip out the color profile, and I don't know enough about WebP or AVIF to re-add it. Other than re-saving the image, which increases the size since everything gets decoded and re-encoded.
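For reference, ExifTool can usually graft a profile back on without re-encoding the image data (filenames hypothetical; this assumes a fairly recent ExifTool, since writable WebP support is newer and AVIF support is spottier):

    # pull the ICC profile out of the original
    exiftool -icc_profile -b original.png > display-p3.icc
    # attach it to the optimized file without touching the pixels
    exiftool "-icc_profile<=display-p3.icc" optimized.webp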
If only more people were considerate about the amount of bandwidth their websites serve. I just had a conversation where our system generates a CSV and I felt we should be zipping the file, but I was overridden with "people can download large files easily today" and "not having to unzip is worth it" as the reasons.
I think a lot of web servers will automatically gzip data if the client accepts it.
You might not even have to do anything config-wise if this is for a website and normal web browsers.
If it's for an app, it might depend on what client library you use, but even then I'd guess a lot of them support it automatically without any special configuration required.
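For example (placeholder URL), Python's requests library advertises gzip by default and decompresses transparently, so the consumer never notices:

    import requests

    resp = requests.get("https://example.com/report.csv")
    print(resp.request.headers["Accept-Encoding"])  # "gzip, deflate" by default
    print(resp.headers.get("Content-Encoding"))     # "gzip" if the server compressed
    csv_text = resp.text                            # already decompressed for you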
I just took a look at my httpd.conf, and the deflate module is being loaded, but I couldn't find any directive specifying which file types it applies to. I'll have to set up a couple of tests to see the before/after of enabling it for a specific file type, something like the snippet below.
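Something along these lines, assuming mod_deflate is the module that's loaded (the MIME types are just a guess at what the site serves):

    AddOutputFilterByType DEFLATE text/html text/css text/csv application/json application/javascript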
I'll have to leave myself a comment in the conf file to remind me that I got a little tech support in a divergent comment thread on a post about interesting CSS anims =)
Also, instead of compressing files live on every request, you can pre-compress them, put them next to the uncompressed files with the same name but a .gz extension, and tell your server to serve those if the client supports it [0].
Useful for bigger files (so you don't have to compress them on request), and also if you want to use a higher compression level: since you're only compressing them once, not on every request, you can spend a bit more CPU on making them a bit smaller.
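A sketch of both halves, assuming nginx with the gzip_static module compiled in (Apache can do the same with a bit of mod_rewrite or type-map glue):

    # at build/deploy time: compress once, keep the original
    gzip -k -9 app.js          # writes app.js.gz

    # nginx config: serve app.js.gz for app.js when the client accepts gzip
    gzip_static on;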
Unfortunately, the specific things I was looking to compress are reports generated live, which are then removed with a TTL-type policy.
I would, however, definitely be interested in having the site's .js files delivered as .gz files. This has been something I've always been curious about and had as a TODO item, but I think this discussion will move it up my priority list. Luckily, we are a very small site with a laughably small set of users, but there's nothing wrong with using best practices even so.
I'm going to be honest, given your attitude to this as seen in this thread, even if you don't consider yourself full-stack right now, it looks like it's only a matter of time.
You've got the important skill, asking questions and being willing to try, learn and get your hands a little bit dirty =)...
It's funny, in that at every job I've had since 1999 I have brought up a LAMP stack in some form or fashion. I know enough to search for whatever new flavor of Linux each place has used, but I would never consider myself full stack, nor insult actual full-stack peeps by calling myself one. I'd fail any interview for that kind of position.
Then again, I'd say that about all of the other roles I've ever had. They were all self-taught on the job, learning from more experienced coworkers. I know enough to get something done, but I'd never have full faith that it would survive any scrutiny beyond having accomplished the task at hand.
Working on mobile, it gets even crazier because everything* is shipped to everyone.
"Oh we just need to include this lib for that one tiny thing, who cares."
Turns into
"we're using 7TB of mobile storage around the world for this one tiny feature"
or
"Our app is responsible for ~472TB of mobile storage being used."
(actual numbers I just calculated btw)
*: Some stuff is optimized out, but essentially anything needed to run any part of the app on a user's device is included, even if they only ever use one specific feature and nothing else.
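To make the scale concrete with made-up but plausible numbers, the math is just binary size times install base:

    lib_size_bytes = 70 * 1024                   # a "tiny" 70 KB dependency
    installs = 100_000_000                       # shipped to every install
    print(lib_size_bytes * installs / 1024**4)   # ~6.5 TB worldwide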
By "on mobile", do you mean working on native apps, where this bloats the app size?
The "just for one thing" also seems to be a common complaint about NPM as well. I'm sure any package/library managed ecosystem suffers this problem, but it really seems the Node gets tagged with it much more frequently.
Yep, so much crud gets included in app binaries that's just dead code, feature-flagged off permanently.
IMO a lot of companies pinch pennies on this internally ("focus on performance and app size, the app binary is huge!") while jumping over suitcases of cash ("yes, we absolutely need those 8 analytics libs driven by a JSON blob, of which we've only used one in the past year due to our contracts expiring 4 years ago").