
This is exactly what will happen, just like with cookie warnings, etc.

To be effective, warnings like this have to be MANDATED on the item in question, and FORBIDDEN when not present.

Otherwise you stick a prop 65 "may contain" warning on everything, and it's pointless.

(This post may have been generated by AI; this notice in compliance with AI notification complications.)



The Prop 65 warnings are probably unhelpful even when accurate because they don't show anything about the level of risk or how typical or atypical it is for a given context. (I'm thinking especially about warnings on buildings more than on food products, although the same problem exists to some degree for food.)

It's very possible that Prop 65 has motivated some businesses to avoid using toxic chemicals, but it doesn't often help individuals make effective health decisions.


While you may think it didn’t have an effect, a recent 99pi episode covered it, and it sounds like it has definitely motivated many companies to remove chemicals from their products.

It’s not perfect, but it has had a positive effect: https://99percentinvisible.org/episode/warning-this-podcast-...


Beat me to it!

As a non-Californian I’m used to them from the little stickers on seemingly every electronics cable that comes with something I buy.

But from listening to that episode when it came out it sounds like it really has helped a lot, even if it’s also become kind of obnoxious.


  seemingly every electronics cable
If it's something you've bought recently the offending ingredient should be listed. Otherwise, my money would be on lead compounds used as a stabilizer in the PVC jacket. Either way, at least you have the tools to find out now.


But does it actually benefit the customer?

Like, is it one of those things that removes a one-in-a-billion chance of cancer, but now the product wears out twice as fast, leading to a doubling of sales?


Thanks, that's an interesting overview.


Indeed.

First time I was in CA, my then-partner's mother saw a Prop 65 notice and asked why they couldn't just ban the substances.

We were in a restaurant that served alcohol, one of the known substances is… alcoholic beverages.

https://en.wikipedia.org/wiki/California_Proposition_65_list...

Banning that didn't work out so well the last time.


Prop 65 is also way too broad. It needs to be specific about what carcinogens you’re being exposed to and not just “it’s a parking garage and this is our legally mandated sign”


As of 2016 companies are required to list the specific chemical and how to avoid or minimize exposure.


Seems to still be pretty pointless, considering that the advice for roads, parking lots, and garages all boils down to the same thing: if you want to avoid exposure, just stay away from any of those.


It's great for things you wouldn't expect. Like mercury in fish, or lead and BPA in plastic.


I have yet to see any of that in practice. Guessing no one is enforcing it.


There was a push to crack down on over labeling, but manufacturers have pushed back quite a bit.

https://www.corporatecomplianceinsights.com/california-warni...


The entire Stanford campus (which is much bigger than a typical university) has a Prop 65 warning at the entrance.

898 Bowdoin St https://maps.app.goo.gl/uHTTd7yYtAibAg1QA

In some of the Street View passes the sign is washed out; click through to different capture dates to see it.


The "sponsored content" tag on youtube seems to work very well though. Most content creators don't want to label their videos sponsored unless they are, I assume the same goes for AI generated content flags. Why would a manual content creator want to add that?

> This post may have been generated by AI

I doubt "may" is enough.


The "Sponsored Content" tag on a channel should link to a video of face / voice of the channel talking about what sponsored content means in a way that's FTC compliant.


I think the concern is that people might use the label out of caution if, say, Adobe ships some automatic AI enhancement in your video editor or whatever.


That would be either poor understanding or poor enforcement of the rule, since they specifically list things like special effects, beauty filters, etc. as allowed.

A more plausible scenario is that you aren't sure whether all your stock footage is real. Though with YouTube creators being one of the biggest groups of customers for stock footage, I expect most providers will put very clear labeling in place.


That's a much clearer line, though; it's much simpler to know whether you were paid to create this content or not. Use of AI isn't, especially if it's buried deep in some tool you used.

Does blurring part of the image with Photoshop count? What if Photoshop used AI behind the scenes for whatever filter you applied? What about some video editor feature that helps with audio/video synchronization or background removal?


This is a problem of provenance (as it's known in the art world), and being certain of the provenance is a difficult thing to do. It's like converting a cowboy-coded C++ project to consistently using const: you need to dig deep into every corner and prefer dependencies that obey proper const usage. Doing that as an individual content creator would be extremely daunting, but this isn't about individuals. If Getty has a policy against AI and guarantees no AI generation on their platform while Shutterstock doesn't[1], then creators may end up preferring Getty so that they can label their otherwise AI-free content as such on YouTube. Maybe it gets incorporated into the algorithm and gets them more views; maybe it's just a moral thing. If there's market pressure, then the down-the-chain people will start getting stricter, and, especially if one of those intermediary stock providers violates an agreement and gets hit with a lawsuit, we might see a more concerted movement to crack down on AI generation.

At the end of the day it's going to be drenched in contracts and obscure proofs of trust, i.e. some signing cert you can attach to an image if it was generated in an entirely controlled environment that prohibits known AI generation techniques. That technical side is going to be an arms race, and I don't know if we can win it (which may just result in small creators being bullied out of the market)... but above the technical level I think we've already got all the tools we need.

1. These two examples are entirely fabricated
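
To make the "signing cert" idea above concrete, here's a toy Python sketch. The names and the shared-secret scheme are invented for illustration; a real system would use asymmetric keys and certificates, not an HMAC secret.

  # Toy provenance tag: a trusted pipeline signs the exact bytes it
  # produced, and a verifier checks that nothing changed afterwards.
  import hashlib
  import hmac

  PIPELINE_KEY = b"secret-held-by-the-trusted-pipeline"  # hypothetical

  def sign_asset(image_bytes: bytes) -> str:
      # Assert "this controlled pipeline produced exactly these bytes".
      return hmac.new(PIPELINE_KEY, image_bytes, hashlib.sha256).hexdigest()

  def verify_asset(image_bytes: bytes, tag: str) -> bool:
      # True only if the bytes are unchanged since signing.
      expected = hmac.new(PIPELINE_KEY, image_bytes, hashlib.sha256).hexdigest()
      return hmac.compare_digest(expected, tag)

  frame = b"...raw image bytes..."
  tag = sign_asset(frame)
  assert verify_asset(frame, tag)              # untouched: passes
  assert not verify_asset(frame + b"x", tag)   # edited after signing: fails

The hard part, as above, is trusting the environment that holds the key; that's where the arms race lives.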


You may be interested in the Content Authenticity Initiative’s Content Credentials. The idea seems to be to keep a more-or-less-tamperproof provenance of changes to an image from the moment the light hits the camera’s sensor.

It sounds like the idea is to normalize the use of such an attribution trail in the media industry, so that eventually audiences could start to be suspicious of images lacking attribution.

Adobe in particular seems to be interested in making GenAI-enabled features of its tools automatically apply a Content Credential indicating their use, and in making it easier to keep the content attribution metadata than to strip it out.
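
As a very loose sketch of that idea (this is not the actual C2PA/Content Credentials data format, just an illustration of a hash-chained edit history with invented tool names):

  # Each edit commits to the hash of the previous state, so stripping
  # or reordering history breaks the chain.
  import hashlib
  import json

  def entry(prev_hash: str, action: str, tool: str) -> dict:
      body = {"prev": prev_hash, "action": action, "tool": tool}
      digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode())
      body["hash"] = digest.hexdigest()
      return body

  chain = [entry("", "captured", "CameraModelX")]  # hypothetical tools
  chain.append(entry(chain[-1]["hash"], "crop", "EditorY"))
  chain.append(entry(chain[-1]["hash"], "generative_fill", "GenAIToolZ"))

  # A viewer can flag the asset because a generative step is on record.
  print(any(e["action"].startswith("generative") for e in chain))  # True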


Maybe this could motivate toolmakers to label their own products as “Uses AI” or “AI Free”, allowing content creators to verify that their entire toolchain is AI free.

As opposed to today, where companies are doing everything they can, stretching the truth, just so they can market their tools as “Using AI.”


Where do you draw the line on things like Photoshop or Premiere, where AI suffuses the entire product? Not everything AI is generative AI.


You can't use them. If this matters to enough people, other tools that match most of the functionality without the AI features will emerge and take over the market... alternatively, Adobe wises up and rolls back the AI stuff, or isolates it into consumer-level-only features that mark images as tainted.


This is a great point and I don’t know. We are entering a strange and seemingly totally untrustworthy world. I wouldn’t want to have to litigate all this.


This is depressing: we’re going to intentionally use worse tools to avoid some idiotic scare label. It’s basically the entire GMO or “artificial flavor” debate all over again.

If you edit this image by hand you’re good, but if you use a tool that “uses AI” to do it, you need to put the scare label on, even if pixel-for-pixel both methods output the identical image! Just as GMO/not-GMO has no correlation to harmful compounds being in the food, and artificial flavors are generally purer than those extracted by some wacky and more expensive means from a “natural” source.


> To be effective, warnings like this have to be MANDATED on the item in question, and FORBIDDEN when not present.

I think for it to be effective you'd have to require an itemized list of WHAT is AI-generated. Otherwise, what happens when a content creator has a GenAI logo or feature in every video and just puts a lazy blanket disclaimer on everything?

> (This post may have been generated by AI; this notice in compliance with AI notification complications.)

:D


For something like YouTube, you could have the video's progress bar be a different color for the AI sections. Maybe three: real, unknown, AI. Without an "unknown" type tag, you wouldn't be able to safely use clips.
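
A minimal sketch of how those per-section labels might be represented, assuming invented segment bounds, labels, and colors:

  # Timeline segments labeled real/unknown/ai, mapped to bar colors.
  from dataclasses import dataclass

  COLORS = {"real": "green", "unknown": "yellow", "ai": "red"}

  @dataclass
  class Segment:
      start: float  # seconds
      end: float
      label: str    # "real" | "unknown" | "ai"

  timeline = [
      Segment(0.0, 42.0, "real"),
      Segment(42.0, 55.0, "ai"),       # generative b-roll
      Segment(55.0, 71.5, "unknown"),  # unverified stock clip
  ]

  def color_at(t: float) -> str:
      # Pick the bar color for playback position t; unlabeled time
      # defaults to "unknown", matching the safe-fallback idea above.
      for seg in timeline:
          if seg.start <= t < seg.end:
              return COLORS[seg.label]
      return COLORS["unknown"]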


Yes, AI could have been used anywhere in the production pipeline: in the script, in the stock photos or video, and more.


The same is true for an asset's licensing/royalty-free status, which creators are surely aware of when pulling these things in.


This will make AI the new sesame allergen [1]: if you aren't 100% certain every asset you use isn't AI-generated, then it makes sense to stick some AI-generated content in and label the video accordingly, just to stay in compliance.

[1] https://www.npr.org/sections/health-shots/2023/08/30/1196640...


Wow. This is an awesome education in why you can’t just regulate the world into what you want it to be without regard to feasibility. I’m sure the few who are allergic are mad, but it would also be messed up to just ban all “allergens” across the board, which is the only effective and fair way to guarantee that this loophole couldn’t be used to comply with these laws. There isn’t much out there that somebody isn’t allergic to or intolerant of.


> would also be messed up to just ban all “allergens” across the board

Lol, this sounds like one of those fables where an idiot king bans all allergens, and then a week later everyone in the kingdom is starving to death, because it turns out that in a large enough population there are enough different allergies that everything gets banned.


> To be effective, warnings like this have to be MANDATED on the item in question, and FORBIDDEN when not present.

That already happens for foods.

The solution for suppliers is to intentionally add small quantities of the allergen (sesame). [1] By having it as an actual ingredient, manufacturers don't have to worry about whether there is cross-contamination during processing.

[1] https://www.medpagetoday.com/allergyimmunology/allergy/10652...


Disagree. I will proudly write that my work is AI free.


How much AI is enough to warrant it, though? Is human motion-capture-based content AI or human? How about automatic touch-up makeup? At what point does touch-up become face swap?


I’ve found Prop 65 warnings to be useful. They’re not on literally everything, and when I do see a Prop 65 warning, I consciously try to pick a product without one.


I think the opposite will happen: non-AI content will be the new "certified organic".



