This is an excellent point, and I don't know exactly where to draw the line ("I know it when I see it"). I personally use "auto" (probably heuristic, maybe soon-ish AI-powered) features to adjust levels, color balance, etc. But using AI to add things that are _not at all present_ in the original crosses the line from photography into digital art for me.
I draw the line at whether the original pixel values are still part of the input. As long as you’re manipulating something the camera actually captured, it’s still photography, even if the math isn’t the same for every pixel, or is AI-powered.
But IMO it’s a point worth bringing up: most people have no idea how digital photography works, or how much work it takes to measure, quantify, and interpret the analog signal coming off a camera sensor before it even resembles an image.
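To make that concrete, here is a minimal sketch (NumPy only, with a synthetic RGGB Bayer mosaic and made-up white-balance gains, so treat it as an illustration rather than a real pipeline) of just the very first steps needed before raw sensor data even resembles an image; real raw converters also do black-level subtraction, lens corrections, noise reduction, tone mapping and more.

```python
import numpy as np

def demosaic_rggb_half(raw):
    """Naive half-resolution demosaic: each 2x2 RGGB block becomes one RGB pixel."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

def develop(raw, wb_gains=(2.0, 1.0, 1.6), gamma=2.2, white_level=65535.0):
    """Demosaic, white-balance, and gamma-encode linear sensor data (illustrative gains)."""
    rgb = demosaic_rggb_half(raw.astype(np.float64)) / white_level
    rgb = np.clip(rgb * np.asarray(wb_gains), 0.0, 1.0)   # per-channel white balance
    return (255 * rgb ** (1.0 / gamma)).astype(np.uint8)  # rough sRGB-style encoding

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_raw = rng.integers(0, 65536, size=(8, 8), dtype=np.uint16)  # stand-in for sensor data
    print(develop(fake_raw).shape)  # -> (4, 4, 3)
```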
There is the small complication that the moon texture that Samsung got caught putting onto moon-shaped objects in photos is, of course, the same side of the same moon.
> the moon texture that Samsung got caught putting onto moon-shaped objects in photos is, of course, the same side of the same moon.
Probably not exactly the same side and orientation. https://en.wikipedia.org/wiki/Libration#Lunar_libration: “over time, slightly more than half (about 59% in total) of the Moon's surface is seen from Earth due to libration”
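For anyone curious how much that shifts on a given date, below is a rough sketch following the lunar-libration recipe in the Skyfield library's documentation (the kernel filenames are the ones that recipe uses, and they are downloaded on first run). It prints the libration angles, i.e. how far the apparent face is tilted, for an arbitrary timestamp.

```python
from skyfield.api import PlanetaryConstants, load

ts = load.timescale()
t = ts.utc(2023, 3, 11)                 # arbitrary example date

eph = load('de421.bsp')                 # JPL ephemeris
earth, moon = eph['earth'], eph['moon']

pc = PlanetaryConstants()
pc.read_text(load('moon_080317.tf'))
pc.read_text(load('pck00008.tpc'))
pc.read_binary(load('moon_pa_de421_1900-2050.bpc'))
frame = pc.build_frame_named('MOON_ME_DE421')   # Moon mean-Earth/polar-axis frame

# Direction of Earth as seen from the Moon, expressed in the Moon's body-fixed
# frame: its latitude/longitude are the libration angles (the sub-Earth point).
p = (earth - moon).at(t)
lat, lon, distance = p.frame_latlon(frame)
lon_degrees = (lon.degrees - 180.0) % 360.0 - 180.0

print('Libration in latitude:  {:+.3f} deg'.format(lat.degrees))
print('Libration in longitude: {:+.3f} deg'.format(lon_degrees))
```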
Sort of, kind of, but not shot at the same time, and not from the same location.
I would object slightly less if they built a model (3D or AI) that captures the whole side of the Moon in high detail, and used that, combined with the precise location and date/time, to guide resolving the blob in the camera input into a high-resolution rendering *that matches, with high accuracy and precision, what the camera would actually see if it had better optics and a better sensor*. It still feels like faking things, but at least the goal would be to match reality as closely as possible.
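As a rough illustration of the inputs such a rendering would need, here is a sketch using Astropy (my choice of library, not something mentioned above; the observer coordinates and timestamp are placeholders) that computes where the Moon is in the sky, its apparent angular size, and an approximate illuminated fraction for a given place and time. Orientation/libration, as in the Skyfield sketch above, would also be needed.

```python
import numpy as np
from astropy import units as u
from astropy.coordinates import AltAz, EarthLocation, get_body
from astropy.time import Time

MOON_RADIUS_KM = 1737.4                 # mean lunar radius

# Placeholder observer (roughly San Francisco) and timestamp.
observer = EarthLocation(lat=37.77 * u.deg, lon=-122.42 * u.deg, height=16 * u.m)
t = Time("2023-03-11 06:00:00")

altaz = AltAz(obstime=t, location=observer)
moon = get_body("moon", t, observer).transform_to(altaz)
sun = get_body("sun", t, observer).transform_to(altaz)

# Apparent angular diameter from the topocentric distance.
distance_km = moon.distance.to(u.km).value
angular_diameter_deg = np.degrees(2 * np.arctan(MOON_RADIUS_KM / distance_km))

# Crude illuminated fraction from the Sun-Moon elongation (phase angle ~ 180 deg minus elongation).
elongation = moon.separation(sun)
illuminated = (1 - np.cos(elongation.radian)) / 2

print(f"Alt/Az:               {moon.alt.deg:6.1f} / {moon.az.deg:6.1f} deg")
print(f"Angular diameter:     {angular_diameter_deg * 60:.1f} arcmin")
print(f"Illuminated fraction: {illuminated:.2f}")
```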