The actually harmful AI deepfakes, like fake Elon and Mr. Beast scamming old people with fake crypto, and deepfake porn, are already illegal and still impossible to regulate.
Are you going to fine Google every time a throwaway account uploads to YouTube?
Let's be realistic here: this law will only be used by celebrities hiring lawyers to attack comedic parody.
In practice, what that would do is make the platforms even more draconian in how they screw over and abuse the public, while giving them the excuse that it isn't the platform's fault; the government is forcing them to be evil.
In practice, crypto scams, Mr. Beast giveaway scams, and many other types of scams are freely displayed on the front page of YouTube, in the #1 spot of recommended videos. This is what Google is allowing on their platform right now.
Any excuses for why a stricter duty to protect against obvious scams is unworkable should be really easy to see through. Personally, I don't think they should be asked to act on ambiguous borderline cases where something could be interpreted as a scam by someone, but allowing "send 1 BTC, get 2 back" to display on the front page of YouTube is inexcusable.
Either way, they're going to continue doing whatever they want and it's not like they're going to face consequences in our lifetime.
I agree that those scams being openly on YouTube is an issue; my concern is that we've seen how they blindly abuse the public while using the retention of DMCA safe harbor as an excuse, so I don't think plainly expecting the company to suppress things will do the job. All of this social-media-related legislation needs to be rebuilt from the ground up by people who are actually from a current generation.
Since fighting false/abusive claims requires more effort than most people/creators can afford, and Google only cares about retaining its safe harbor protection, the DMCA is heavily abused by both malicious individuals and large companies to suppress anything they dislike. Technically there's a counter-claim system, but IIRC Google just hands over your personal info to the other party to fight the claim in court, which obviously puts anyone being falsely claimed in a tough spot (fight the claim and dox yourself to a malicious actor, or don't fight the claim and get copyright strikes/lose revenue).
Sure, everyone can see that Google is just making an excuse to not have to spend more on checking the validity of reports, but ultimately that doesn't really change the fact that they continue to get away with it anyway.
> Are you going to fine Google every time a throwaway account uploads to YouTube?
That kind of depends on Google's modus operandi, doesn't it? Like a telecom network not doing enough to combat robocalls.
If Google wants to reap the profit from allowing people to upload content, shouldn't they also share part of the responsibility for anything that happens as a result of their own (in)action?
If Google has a habit of taking down malicious content, then less responsibility, perhaps?
If Google attempts to put safeguards in place against repeat offenders, then less responsibility again, perhaps?
Like if a person is speeding and causes an accident, the government could in theory share part of the responsibility if the road they designed and built did not have adequate, best-practice safeguards against head-on collisions. A news network could also face legal issues for lies said on its programmes by guests and hosts regarding voting machines.
Then maybe one monolithic video platform that serves the streaming needs of an audience the size of YouTube's isn't a good business plan.
For decades the mantra of Silicon Valley has been "eat the world" with whatever service you provide. Alright, fine. Then provide it. And if you can't provide a service to billions of people without the quality of said service going into the shitter, then my suggestion is to not eat the world.
That's not actually true; the problem with most of these deepfake video ads is that they are auto-approved on various platforms with no human oversight.
So what you are saying is that it is impossible to regulate? After all, if the regulation was successful then the ads would be approved only after careful human oversight. But if regulation is impossible then the actors can do essentially whatever they please, including allowing ads to be auto-approved.
Well, you said it wasn't true, but then went on to explain how it has held true thus far. If you actually see it as being possible, why not provide evidence supporting that possibility rather than evidence to the contrary?
If it is possible, what is holding us back? It was asserted that the regulation is already in place, so that is not the stumbling block. It is getting people to actually enforce the regulation that is the hindrance.
Enforcement requires using up finite resources. Enforcement is not happening because we don't have the will to use those resources for that purpose. We have other things we deem more important. What is going to change the will? Without a change in will, it is impossible.
> So what you are saying is that it is impossible to regulate?
No, it would be difficult to regulate, at scale. Just like most things are difficult at scale, and history is awash with corporate whining about how doing anything at all to mitigate the harm they do will annihilate them, capitalism, and the concept of the free market.
"You can't possibly expect us to get all the children out of the coal mines!"
"How can we function as a business if we have to pay negroes the same wages as whites?"
"The requirement that we have to store pesticides away from employee break rooms is government overreach!"
> No, it would be difficult to regulate, at scale.
The earlier comment pointed out that the regulation already exists. It is just not being enforced, supposedly because it is impossible to regulate. Perhaps you mean it is difficult to regulate? Which is the exact same thing the earlier commenter said.
I mean, sure, in some hypothetical world where there are no constraints one could conceive of how it would be possible to regulate, but in the real world where people have other, competing concerns and only so much time in the day, is there actually the will to regulate it? Without the will then it is, indeed, impossible.
It's not that making content manually is "OK", it's that doing so requires so much time/skill that it isn't a concern for legislators. The high barrier to entry is already limiting enough in most cases.
It's like how I can't walk into a store here and buy gunpowder, but I could visit three different stores for the ingredients and mix them myself... if I was extremely dedicated.
And how do you do it? User reports? And how do you verify those user reports?
At the end of the day you need an automated system for scale, and that probably won't be better than what's in place already.
A regulator employs a handful of staff who estimate the number of offences and hand out the fines. If Google disagrees, they can have their day in court instead, where a judge rules on a representative selection of the alleged offences.
The regulator gives everyone a break on the first, say, $100 million of fines a year, to recognise that some things will fall through the cracks.
The regulator is publicly funded, and the fines also go back to the public purse.
The regulator employs some mix of low-level data analysts who click links all day, technologists who build automated review systems, and bureaucrats who update policy documents and talk to politicians and companies. A revolving-door system develops between the regulator and the content moderation industry. The regulator is a generation or two behind tech giants in the sophistication of its systems, but that doesn't matter - it only needs to catch the most egregious offenders.
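To make the allowance arithmetic above concrete, here's a minimal sketch. The $100 million figure comes from the example above; the per-offence amounts, function names, and everything else are hypothetical, not drawn from any actual bill:

```python
# Hypothetical sketch of a per-platform fine ledger with an annual allowance.
# The per-offence amounts below are illustrative only.

ANNUAL_ALLOWANCE = 100_000_000  # first $100M of gross fines per year is forgiven


def payable_fine(gross_fines_so_far: int, new_gross_fine: int) -> int:
    """Portion of a new fine actually payable once the allowance is used up."""
    remaining_allowance = max(0, ANNUAL_ALLOWANCE - gross_fines_so_far)
    return max(0, new_gross_fine - remaining_allowance)


# Example: a platform has already accrued $90M in gross fines this year,
# and the regulator estimates 2,000 new offences at $10,000 each ($20M gross).
print(payable_fine(gross_fines_so_far=90_000_000, new_gross_fine=2_000 * 10_000))
# -> 10000000: only the amount above the $100M annual allowance is payable
```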
That would force them to push more informative stuff at the expense of fake inflammatory crap that attracts more views. I don't see a change like that coming anytime soon, but hope never dies.
> Are you going to fine Google every time a throwaway account uploads to YouTube?
Yes, and Google could use the same AI technology to automatically remove such material at a massive scale. Of course, tons of legitimate material would get wiped out by that system too, but Google doesn't care about that, and neither does the US government.
Yet this is likely part of the background reasoning for laws such as this. Not only is this bill entirely unworkable, and against how things have been done with likenesses for ages, it is a full-on assault on the First Amendment, going after the most potent, easily spreadable political speech out there right now.
Yes. A simple fine for each verified instance of prominently displaying a clearly fraudulent advertisement seems like a reasonable solution. Maybe instead of spending engineering cycles thinking of how to jam a larger volume of ads down our throats, Google can develop better ways to detect and reject scams. Win/win.
>> Are you going to fine Google every time a throwaway account uploads to YouTube?
Sure. Social media companies have been denying responsibility for years, but that doesn't have to be the case. The core problem is anonymity, or inability to verify a source. That and automation that users can just lie to.
Well, if "we" want to prevent a scenario where Alice provides wrong/loose identification to Bob and Bob accepts it, there is only really two realistic ways to do so: force Alice to not do this, or force Bob to check (maybe with "our" aid). Or do you propose having Victor who would regularly come by Bob and check all of the new identifications provided from the all of Alices in the meantime?
I think yellowsir actually has a point, and I'm the one they responded to. For search I think Google should care, as their users will eventually leave if they can't find reliable information. YouTube on the other hand... I don't see any business reason for them to care about content quality or authenticity - they make money from every video people watch.
To my first point, I don't know how any Google competitor is going to provide more reliable information until the root problem of authenticating sources is solved. So maybe there is really no reason for them to care after all.
I've reported fake Elon Musk crypto scam streams on accounts that were taken over by scammers; each time it took YouTube over 24 hours to stop the scammers from streaming, let alone restore the channel to its rightful owner.
I mean, we have the DMCA, which requires mass enforcement of copyright, so the "fine YouTube every time some throwaway account uploads" ship has already sailed.
Honestly, in this age of bulk-uploaded disinfo, I'm kinda disappointed that only copyright gets that kind of extreme protection... but I suppose it's much easier to do copyright, since IDing an exact copy of a given source is infinitely simpler than detecting, e.g., defamation.
The DMCA gets hosts out of being fined for user uploads, or even having to think much about it. YouTube has its own system that goes well beyond the DMCA, probably because they want to encourage the copyright owner to agree to accept monetization of the infringing material instead of having it removed.
For a host that just sticks to the DMCA it is quite simple (a rough code sketch follows the list):
1. A claimant alleges that a user is infringing their copyright.
2. The host notifies the alleged infringer and removes the content.
3. If the alleged infringer disagrees that the content is infringing they notify the host.
4. The host notifies the claimant that the user disputes the claim, and gives the claimant the user's contact information.
5. If the host does not receive, within a couple of weeks, proof from the claimant that the claimant has filed a copyright infringement lawsuit against the user, the host puts the content back up.
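For illustration only, here's that flow as code. The class, the field names, and the exact 14-day window are my own simplifications, not anything taken from the statute:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class TakedownClaim:
    """Toy model of the DMCA notice / counter-notice flow described above."""
    content_id: str
    claimant: str
    uploader: str
    counter_noticed: bool = False
    counter_notice_date: datetime | None = None
    lawsuit_proof_received: bool = False
    content_live: bool = True

    def receive_notice(self) -> None:
        # Steps 1-2: claimant alleges infringement; host notifies the
        # alleged infringer and takes the content down.
        self.content_live = False

    def receive_counter_notice(self, now: datetime) -> None:
        # Steps 3-4: uploader disputes the claim; host forwards the dispute
        # (including the uploader's contact info) to the claimant.
        self.counter_noticed = True
        self.counter_notice_date = now

    def review(self, now: datetime) -> None:
        # Step 5: if no proof of a lawsuit arrives within roughly two weeks
        # of the counter-notice, the content goes back up.
        if (self.counter_noticed
                and not self.lawsuit_proof_received
                and self.counter_notice_date is not None
                and now - self.counter_notice_date >= timedelta(days=14)):
            self.content_live = True
```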
Copyright isn't only trivially definable, it's also uncontroversial.
Let's say you decided to police "disinformation" about the current Israel/Gaza conflict. In about 5 seconds you'll be inundated with propagandists trying to define what is "true" and what's "disinformation".
From what I read on Wikipedia, this wasn't supposed to be a KYC-related device so I'm not sure I see the point you're making.
Almost every banking/investment/trading/payment provider out there routinely has a KYC mechanism in place to avoid throwaway accounts being used for money laundering (and this is mandated by regulation), so this is definitely a thing. (I'm not particularly advocating for it, since it would have serious privacy and freedom-of-speech implications, but claiming this is an impossible problem is a gross oversimplification.)