
Also out of curiosity, I did some quick math regarding that claim you read somewhere.

Cellphone battery charge: I have a 5000mAh cellphone battery. If we ignore charging losses (pretty low normally, but not sure at 67W fast charging)... That battery stores about 18.5 watt-hours of energy (at a 3.7 V nominal cell voltage), or about 67 kilojoules.

Generating a single image at 1024x1024 resolution with Stable Diffusion on my PC takes somewhere under a minute at a maximum power draw under 500 W. Let's cap that at 500 W x 60 s = 30 kilojoules.

So it seems plausible that for cellphones with smaller batteries, and/or with more intensive image generation settings, there could be overlap! For typical cases, I think you could get a few (low single digits) AI-generated images for the power cost of a cellphone charge, maybe a bit better at scale.
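
To sanity-check that ratio, here's the arithmetic as a quick Python snippet. The 3.7 V nominal cell voltage is an assumption on my part; the 500 W / 60 s ceiling is the worst case from above:

    # Back-of-the-envelope: phone charge vs. one local SD image.
    # Assumes a 3.7 V nominal Li-ion cell; 500 W for 60 s from above.
    battery_mah = 5000
    cell_voltage = 3.7
    battery_j = battery_mah / 1000 * cell_voltage * 3600   # ~66,600 J
    image_j = 500 * 60                                      # 30,000 J
    print(f"battery: {battery_j/1000:.1f} kJ, image: {image_j/1000:.1f} kJ")
    print(f"images per phone charge: {battery_j / image_j:.1f}")   # ~2.2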

So in other words, maybe "technically incorrect" but not a bad approximation to communicate power use in terms most people would understand. I've heard worse!



Your home setup is much less efficient than production inference in a data center. An open-source implementation of SDXL-Lightning runs at 12 images a second on a TPU v5e-8, which draws ~2 kW at full load. That's about 170 J per image, or roughly 1/400th of the phone charge.

https://cloud.google.com/blog/products/compute/accelerating-...

https://arxiv.org/pdf/2502.01671
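
Back-of-the-envelope, taking the ~2 kW pod draw and 12 images/s at face value, and reusing the 18.5 Wh battery figure from upthread:

    # Per-image energy on a TPU v5e-8 slice, using the figures above.
    pod_watts = 2000
    images_per_sec = 12
    j_per_image = pod_watts / images_per_sec        # ~167 J
    phone_charge_j = 5.0 * 3.7 * 3600               # ~66.6 kJ (5 Ah at 3.7 V)
    print(f"{j_per_image:.0f} J per image, "
          f"about 1/{phone_charge_j / j_per_image:.0f} of a phone charge")  # ~1/400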


These models do not appear out of thin air. Add in the training cost in terms of power. Yes, it's capex and not opex, but it's not free by any means.

Plus, not all of these models run on optimized TPUs; most run on NVIDIA cards, and none of those are that efficient.

Otherwise I can argue that running these models is essentially free, since my camera can do face recognition and tracking at 30 fps without a noticeable power draw, because it uses a dedicated, purpose-built DSP for that stuff.


GPU efficiency numbers in a real production environment are similar.


I doubt it, but I can check the numbers when I return to the office ;)


Oh, that's way better! I guess the comparison only holds approximately for home setups -- thanks for the references.


My PC with a 3060 draws 200 W when generating an image, and it takes under 30 seconds at that resolution; in some configurations (LCM) way under 10 seconds. That's a low-end GPU. High-end GPUs can generate at interactive frame rates.

You can generate a lot of images with the energy you would otherwise use to play a game for two hours; generating an image for 30 seconds uses the same amount of energy as playing a game on the same GPU for 30 seconds.
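
In numbers, using the same 200 W / 30 s figures (a rough sketch; actual game power draw varies):

    # Energy per image on the 3060 (~200 W, ~30 s per image from above).
    watts = 200
    seconds_per_image = 30
    j_per_image = watts * seconds_per_image          # 6,000 J
    gaming_j = watts * 2 * 3600                      # 2 h of gaming at the same draw
    print(f"{j_per_image / 1000:.0f} kJ per image; "
          f"{gaming_j / j_per_image:.0f} images for 2 h of gaming energy")   # 240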


One point missing from this comparison is that cell phones just don’t take all that much electricity to begin with. A very rough calculation is that it takes around 0.2 cents to fully charge a cell phone. You spend maybe around $1 PER YEAR on cell phone charging per phone. Cell phones are just confusingly not energy intensive.


And for reference, it takes around $10/year to run a single efficient indoor LED light bulb. So charging a cell phone for a year's worth of usage costs less than 1/10th of running an efficient LED bulb for the full year.

Again, cell phones are just confusingly not energy intensive.
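
To show my work: assuming ~$0.12/kWh, one full charge of an 18.5 Wh battery per day, and a 10 W LED left on around the clock (the bulb wattage and duty cycle are rough guesses chosen to reproduce the ~$10/year figure), the comparison looks like this:

    # Annual cost comparison under the assumptions above.
    price_per_kwh = 0.12
    phone_kwh_per_year = 18.5 / 1000 * 365           # ~6.75 kWh
    led_kwh_per_year = 10 / 1000 * 24 * 365          # ~87.6 kWh
    print(f"phone: ${phone_kwh_per_year * price_per_kwh:.2f}/year")   # ~$0.81
    print(f"LED bulb: ${led_kwh_per_year * price_per_kwh:.2f}/year")  # ~$10.51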


How about capping the power of the GPU? Modern semiconductors have non-linear performance-to-efficiency curves. It's often possible to get big energy savings with only a small loss in performance.
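
Purely as an illustration -- the 140 W cap and the ~10% slowdown below are made-up numbers, not measurements -- a cap can cut energy per image even though each image takes a bit longer:

    # Hypothetical power-cap illustration (invented numbers, not benchmarks).
    uncapped = {"watts": 200, "seconds": 30}   # baseline figures from the thread
    capped = {"watts": 140, "seconds": 33}     # assume ~10% slower at a 140 W cap
    for name, cfg in (("uncapped", uncapped), ("capped", capped)):
        print(f"{name}: {cfg['watts'] * cfg['seconds'] / 1000:.1f} kJ per image")
    # uncapped: 6.0 kJ/image, capped: ~4.6 kJ/image (~23% less energy)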


> Generating a single image at 1024x1024 resolution

That's not a very big image, though. Maybe if this were 25 years ago.

You should at least be generating 1920x1080; pretend you're making desktop backgrounds from 10 years ago.


> Generating a single image at 1024x1024 resolution with Stable Diffusion on my PC takes somewhere under a minute at a maximum power draw under 500W

That's insane, holy shit. That's not even a very large image.

Apparently my estimates of how power-hungry GPUs are these days were off by an order of magnitude.


Why is that "insane"? Drawing the same image in Photoshop, or modeling and rendering it, in the same quality and resolution on the same computer, would require much more time and energy.


> Drawing the same image in Photoshop, or modeling and rendering it, in the same quality and resolution on the same computer, would require much more time and energy.

Right, but there was a point at which we could stop people from doing stupid shit because it's useless and they're bad with money. Now it seems we've embraced irrational and misanthropic spending as a core service.

We honestly just need to take money away from people who obviously have no clue what to do with it. Using AI seems like a perfect signal for people who have lost touch with an understanding of value.


4090s literally melted their power connectors... https://videocardz.com/newz/nvidia-claims-melting-connector-...


A 1024x1024 image seems like an unrealistically small image size in this day and age. That’s closer to an icon than a useful image size for display purposes.


I think you're being hyperbolic. On a 1080p screen that's almost the entire vertical real estate. You'd upscale it if you're going to actually use this thing for "useful purposes" like marketing material, but that's not an icon.


A bit, I do admit. But given the ubiquity of 2k+ screens I don’t think it’s entirely hyperbolic. Closer to an icon in size, I meant, not necessarily usage.


They're not nearly as common as you think.

1920x1080 is still, by far, the dominant desktop and laptop resolution in 2025.


I just chose this step for the calculations, as it's the most energy-intensive part of my AI workflow. Upscaling / extending to get a usable result is really fast by comparison, and I only do it for a few images. It sort of rounds down to zero.

Most of the energy cost is how many images I have to generate to get a satisfactory one (say 100 with a decent prompt). Looking at the broader picture, the biggest energy cost of all is hiring a human designer for layout / typography and to produce print-ready files. Then managing the manufacturer, haha.
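
Rough tally of that, borrowing the ~6 kJ/image figure from the 3060 comment above (my hardware may differ, so treat it as an assumption) and ~$0.12/kWh:

    # Cost of ~100 candidate generations to get one keeper.
    j_per_image = 6000          # assumed, from the 3060 figures upthread
    candidates = 100
    total_kwh = j_per_image * candidates / 3.6e6     # 600 kJ -> ~0.17 kWh
    print(f"~{total_kwh:.2f} kWh, roughly ${total_kwh * 0.12:.2f} of electricity")
    # about two cents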

A bit off topic, but hopefully something that will brighten your day: I make physical products, so they have to be perfect (it also means I deal in DPI, not pixels). AI speeds up the number of concepts I can generate and send to contractors. I don't try to copy anyone's specific style, that would be boring. I'm sure there are less wholesome uses of this AI thing, but I feel like I've stumbled into something I'm comfortable with.

Believe it or not, I use all this to sell antiques. Like, genuine physical artifacts made by humans hundreds of years ago. I like to tell the story of the era they are from, bring it to life a little with printed supplements. No LLMs for the writing though, I do the research and writing myself, I enjoy it too much. I don't make much money with it, but it's fun, and a way to tell my country's history.



