I worked on a DARPA anti-deepfakes project up until spring 2021, so just before the real tidal wave of generative AI. At that time, the state of the art (in publicly known tech) required a few hours of target footage to train something passably deepfaked. Since then, there have been huge advancements in the generalizability of models. I don't know exactly how low the threshold is now, but it has gone from "only really feasible for celebs/politicians/people with an extensive video presence" to "feasible from a handful of videos". Like your average American's social media footprint.
You still need a pretty beefy rig (an array of multiple 4090 GPUs) to do convincing video generation in a non-glacial amount of time, but it's totally possible with readily available hardware.
The bigger problem is actually "cheapfakes": many people are so confirmation-biased that they will readily amplify even poorly put-together disinformation.