> Sandfall Interactive further clarifies that there are no generative AI-created assets in the game. When the first AI tools became available in 2022, some members of the team briefly experimented with them to generate temporary placeholder textures. Upon release, instances of a placeholder texture were removed within 5 days to be replaced with the correct textures that had always been intended for release, but were missed during the Quality Assurance process.
> "When it was submitted for consideration, representatives of Sandfall Interactive agreed that no gen AI was used in the development of Clair Obscur: Expedition 33. In light of Sandfall Interactive confirming the use of gen AI art in production on the day of the Indie Game Awards 2025 premiere, this does disqualify Clair Obscur: Expedition 33 from its nomination."
Whatever placeholder you use is part of your development process, whether it ships or not. Saying they used none when they did is not cool and rightfully makes one wonder what other uses they may be hiding (or “forgetting”). Especially when apparently they only clarified it when it was too late.
I can understand the Indie Game Awards preferring to act now. Had they done nothing, they would have been criticised too by other people for not enforcing their own rules. They no doubt would’ve preferred to not have to deal with the controversy. Surely this wasn’t an easy decision for them, as it ruined their ceremony.
We’re all bystanders here with very little information, so I’d refrain from using unserious expressions like “witch hunt”, especially considering their more recent connotations (i.e. in modern times, “witch hunt” is most often used by bad actors attempting to discredit legitimate investigations).
> Whatever placeholder you use is part of your development process, whether it ships or not. Saying they used none when they did is not cool and rightfully makes one wonder what other uses they may be hiding (or “forgetting”). Especially when apparently they only clarified it when it was too late.
If it was malicious they wouldn't say a word. They probably interpreted the rule as "nothing in the shipped game is AI" (which is a reasonable interpretation IMO), implemented a policy to replace any asset made by AI, and just missed some texture.
Also the term was pretty vague. Like, is using automatic lipsync forbidden? That's pretty much generative AI, just the result is not a picture but a sequence of movements.
> Saying they used none when they did is not cool and rightfully makes one wonder what other uses they may be hiding (or “forgetting”). Especially when apparently they only clarified it when it was too late.
The article where Meurisse admitted to using AI in the pipeline is from April. You're implying a level of dishonesty that clearly isn't there.
> “witch hunt” is most often used by bad actors attempting to discredit legitimate investigations).
By that logic, "fake news" is now unusable because Trump weaponized it, despite the term accurately describing a real phenomenon that existed before and after his usage. "Gaslighting" would be suspect because it got picked up by people dramatizing ordinary disagreements. Every useful term for describing social dynamics gets captured by someone with an agenda eventually.
Hitler liked chocolate, doesn't mean you shouldn't eat chocolate. "You used a word that bad people also use" is not interesting - it's a way of avoiding the object-level debate while still claiming moral high ground.
Gaslighting would be simply incorrect, since gaslighting refers to an elaborate scheme of making somebody doubt their own perception/sanity. It is a severe form of abuse, requires an ongoing relationship with power dynamics (it cannot happen from a single instance of interaction), and typically results in long-term PTSD for the victim(s).
Agree on the capturing. Watering down terms is highly unfortunate for everyone.
I don’t care if the whole game from end to end is generative AI if it’s an incredible game. Having a moral stance against a specific use of floating point numbers and algorithms in a medium filled with floating point numbers and algorithms is strange.
Describing something so incredibly poorly, that it can be used to describe computing as a whole, only to avoid needing to take a moral stance is strange.
It's an extreme example (maybe the most extreme) of something which is "gen AI" but not problematic,
and as such a naive rule saying "no gen AI at all" is pretty bad competition rule design.
I agree, even though I'm not in favour of gen ai. It was a terrible mistake letting placeholder assets get out in the final release, but it shouldn't actually count as shipping AI-generated content in your product.
> It literally is shipping AI generated content in the product.
When someone goes three miles per hour over the speed limit they are literally breaking the law, but that doesn’t mean they should get a serious fine for it. Sometimes shit happens.
Countries with sane laws include a tolerance limit to take into account flaws in speedometers and radars. Here in Brazil, the tolerance is 10%, so tickets clearly state "driving at speed 10% above limit".
That is not sane, it is dumb. With such a system, you have signs that say "100" but the actual speed limit is "110" and everyone knows the actual speed limit is "110" but they all have to do mental math to reach that conclusion. Just make the sign say the real speed limit instead of lying to you. It's like Spinal Tap wrote your laws.
It’s not dumb, it’s accounting for real-world variance in car speedometer accuracy and possible inaccuracies in the measurement process. Just because your car is telling you you went 98, or the speed camera is telling you you went 101, doesn’t mean that was the actual speed of your car at the moment.
Speed limits are limits, not targets. That's why they're called speed *limits*. You account for variance in the speedometer and the reading device by staying under the limit, not treating it as a target.
I hope this does not come across as antagonistic but isn’t this then another form of mental math again? "I’m actually not allowed to drive the number on the sign but I’m also not allowed to drive a speed within the margin of error so I could be falsely accused of speeding."
The other way around seems more clear in a legal sense to me because we want to prove with as little doubt as possible that the person actually went above the speed limit. Innocent until proven guilty and all that. So we accept people speeding a little to not falsely convict someone.
So your speedo reads 100 km/h in a 100 km/h zone. The intention is that you just treat that as a sign that you're at the limit and don't go faster.
Yes, you _could_ do some mental math and figure out that your speedometer is probably calibrated with some buffer room on the side of overreporting your speed, so you're probably actually doing 96km/h, and you know you probably won't get dinged if you're doing 105km/h, so you "know" you can probably do 110km/h per your speedometer when the sign is 100km/h.
Or you could just not. And that's the intention. The buffers are in there to give people space for mistakes, not as something to rely on to eke 10% more speed out of. And if you start to rely on that buffer and get caught on it, that's on you.
As a driver, I control my speed for a variety of factors, but I assume no responsibility for the variance in the speed checking device. That’s on the people deploying them to ensure they’ve done their job (and is part of the reason tickets aren’t issued for 1kph/1mph over in most jurisdictions).
I understand where you’re coming from, but it’s perfectly sane if your legal system recognizes and accepts that speed detection methodologies have a defined margin of error; every ticket issued for speeding within that MoE would likely be (correctly) rejected by a court if challenged.
The buffer means, among other things, that you don’t have to bog down your traffic courts with thousands of cases that will be immediately thrown out.
So the sign says "100", the police read your speed at "112" but the device has a 5% MoE and in this case your actual speed was 107. Seems like you have exactly the same problem because the laws state the actual speed limit was "110" which you are under, despite being over the posted limit and the police reading you as over both the real and posted limits.
I think the metaphor here would be more like getting your license permanently suspended for going 3 mph over. Whether that happens anywhere or not in reality, the point is, it would be an absurd overreaction.
Not getting the "didn't go over the speed limit" award when you did in fact go over the speed limit shouldn't be a big deal to anyone.
Nobody is preventing the studio from working, or from continuing to make (ostensibly) tons of money from their acclaimed game. Their game didn't meet the requirements for one particular GOTY award, boo hoo
But you’re also not supposed to drive as close to the speed limit as possible. That number is not a target to hit, it’s a wall you should stay within a good margin of.
I understand analogies are seldom flawless, but the speed limit one in particular I feel does not apply because you can get a fine proportional to your infraction (go over the limit a little bit, small fine; go over it a lot, big fine) but you can’t partially retract an award, it’s all or nothing.
That “everyone does it” has no bearing on what should be done. Most people also speed up on yellow lights, but you should be doing the exact opposite.
This depends on the country. In certain countries, speed limits are set by civil engineers as a true upper limit that one is not supposed to exceed. In others, speed limits are set slightly above the average speed one is expected to drive at.
In the former sort of country, drivers are expected to use their judgement and often drive slower than the limit. In the latter sort of country, driving at the speed limit is rather... limiting, thus it is common to see drivers slightly exceeding the speed limit.
(I have a theory in my head that – in general – the former sort of country has far stricter licensing laws than the latter. I am not sure if this is true.)
The problem I have with the whole "licensing standards" thing is that, for everyday activities for most of the population, it's not realistic to regulate to the point that there are really substantial barriers to entry to the degree there are for flying in general. And experience probably counts for more than making people shell out a couple thousand more for courses.
I believe in giving someone a reasonable amount of time to correct their mistakes. Yes, it was a terrible mistake to release the game in that state, but I think correcting it within days is sufficient.
It's not a "terrible mistake" to accidentally ship placeholder textures. Let's tone it down just a wee bit, maybe.
Anyway, I don't agree with banning generative AI, but if the award show wants to do so, go ahead. What has caused me to lose any respect for them is that they're doing this for such a silly reason. There's a huge difference between using AI to generate your actual textures and shipping those, and... accidentally shipping placeholder textures.
It really illustrates that this is more ideological than anything.
If you ever made a typo on an official document, would you want it to be uncorrectable, with you forever responsible for the results? Yeah, that's about that level of silly.
Even if a developer had used an AI tool to ask a question about a library, it would have been a lie.
The question is stupid and I think Sandfall should be given the benefit of doubt that they interpreted the question not literally, but in a way which actually makes sense.
Yeah, I'm fine with replacing generic stuff with generic AI stuff. Or cutting out the boring part; nobody needs to spend hours manually lip-syncing characters or generating thousands of intermediate movement animation steps.
When genAI started making waves my first thought literally was how awesome it would be to flesh out NPC dialog.
It’s immersion breaking to try and talk to a random character only to hit a loop of one or two sentences.
How awesome would it be for every character to be able to have believable small talk, with only a small prompt? And it wouldn’t affect the overall game at all, because obviously the designers never cared to put in that work themselves
I don't find it that surprising. The creatives that are against generative AI aren't against it only because it produces slop. They are against it because it uses past human creative labor, without permission or compensation, to generate profit for the companies building the models which they do not redistribute to the authors of that creative labor. They are also against it due to environmental impact.
In that view, it doesn't matter whether you use it for placeholder or final assets. You paying your ChatGPT membership makes you complicit with the exploitation of that human creative output, and use of resources.
I disagree, this is the worst reason to be against it. It's choosing horses over trains. Manual labor over engines, mail over e-mail. It's basically purely egotistical, placing something as fleeting as your current job over the progress of humanity.
That's a much easier stance to take for people who are not facing loss of income. If we had wealth redistribution mechanisms in place, I think more people would be pro ai.
I wish we could just land on a remedy for this, specifically. "Everyone who'd ever posted to deviantArt, ArtStation, etc., before they were scraped gets a dividend in perpetuity." And force MANGAF to pay. Finally, a way for their outsize profits to flow to the people who've been getting the shit end of the compensation stick since online art platforms and social media became a thing.
It'll never happen because the grift is the point.
Except it uses existing art transformatively, which means that even under our absurd, dystopian IP laws, it’s not exploitation. There isn’t a single artist out there who wouldn’t be running afoul of copyright law if that wasn’t the case.
It’s been insane to me to watch the “creative class”, long styled as the renegade and anti-authoritarian heart of society, transform into hardline IP law cheerleaders overnight as soon as generative AI burst onto the scene.
And the environmental concerns are equally disingenuous, particularly coming from the video game industry. Please explain to me how running a bunch of GPUs in a data center to serve people’s LLM requests is significantly more wasteful than distributing those GPUs among the population and running people’s video games?
At the end of the day, the only coherent criticism of AI is that it stands to eliminate the livelihood of a large number of people, which is a perfectly valid concern. But that’s not a flaw of AI, it’s a flaw of the IP laws and capitalistic system we have created. That is what needs addressing. Trying to uphold that system by stifling AI as a technology is just rearranging deck chairs on the Titanic.
That should be the crux of the issue, and stated plainly.
This is just another scheme where those at the top are appropriating the labor of many to enrich themselves. This will have so many negative consequences that I don't think any reactions against it are excessive.
It is irrelevant whether AI has "soul" or not. It literally does not matter, and it is a bad argument that dilutes what is really going on.
There is still human intentionality in picking an AI generated resource for a surface texture, landscape, concept art, whatever. Doubly so if it is someone who creates art themselves using it.
> This is just another scheme where those at the top are appropriating the labor of many to enrich themselves. This will have so many negative consequences that I don't think any reactions against it are excessive.
When's the last time someone with your opinion turned out to be right in the long run?
The creatives that are the loudest voices against AI for art asset generation in my experience are technically competent but lacking any real pizzazz or uniqueness that would set them apart from generated art, so they feel extremely threatened.
There's also been an extremely effective propaganda campaign by the major entertainment industry players to get creatives to come out against AI vocally. I'd like to see what percentage of those artists made the statement to try and curry favor with the money suits.
Without making a judgment call on quality, it is definitely established artists who rely largely on their technical ability for a living (and their hangers-on) who are most vocal. And they focus on the dual indignities of their style being easily-reproducible in aggregate, but also each individual work having glaring mistakes that they'd never make, while ignoring the actual point of theft - when model builders scraped their work specifically for use in a commercial product.
> There's also been an extremely effective propaganda campaign by the major entertainment industry players to get creatives to come out against AI vocally.
That is not the sort of thing people are referring to when they use the term “generative AI”. It’s basically a completely different technology and the ethical concerns around data sourcing and energy usage are not the same at all.
It's extremely tiring how people pretend like there's no difference between these technologies. The comments on the article are the epitome - "oh they used a computer to make a computer game, the horror"
There is no difference. What about a dungeon hack game that uses generated mazes? Random level generators put level designers out of work, but you never saw anyone carrying signs and carrying on about those.
When did random level generators advance to the stage of generating rich background art in the style of long-term DeviantArt contributors simply by rolling a few PRNGs?
Was that before or after real people had their work scraped w/out permission or acknowledgement?
Describing these things as having no difference appears deliberately obtuse.
The problem of allowing "placeholder AI assets" is that any shipped asset found to be AI is going to be explained away as being "just a placeholder". How are we supposed to confirm that they never meant to ship the game like this? All we know is that they shipped the game with AI assets.
Adding to that: 'it was a placeholder' has been used to excuse direct (flagrant) plagiarism from other sources, such as what happened with Bungie and their game Marathon
How? We don't have access to their version control. How do you validate that an external version history is accurate and reflective of the state of things years ago? Git histories can be rewritten as one pleases.
But creating and picking those placeholders used to be somebody's job, maybe a junior artist. Now they're automated off the back of somebody else's work. And here we have an admission, but how many artists are being sidestepped in major games developers now? It won't be long before the EAs and Ubisofts of the world fire theirs. Then it'll be developers. Then it'll just be a committee of dolphins picking balls to feed into a black box that pumps out games.
It doesn't seem strange that an industry award protects the workers in the industry. I agree, it seems harsh, but remember this is just a shiny award. It's up to the Indie Game Awards to decide the criteria.
> But creating and picking those placeholders used to be somebody's job, maybe a junior artist.
Is it really though? After all it's just maybe a junior artist.
I've had to work with some form of asset pipeline for the past ten years. The past six in an actual game though not AAA. In all these years, devs have had the privilege of picking placeholders when the actual asset is not yet available. Sometimes, we just lift it off Google Images. Sometimes we pick a random sprite/image in our pre-existing collection. The important part is to not let the placeholder end up in the "finished" product.
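To make that concrete, the guardrail can be as dumb as a release check that refuses to build while placeholder-tagged assets are still referenced. A minimal sketch in Python, assuming a hypothetical manifest.json that records each asset's provenance (real engines have their own metadata systems, so treat the format as illustrative):

    import json
    import sys

    def audit_assets(manifest_path):
        # manifest.json is assumed to look like:
        # [{"path": "textures/rock_04.png", "source": "placeholder"}, ...]
        with open(manifest_path) as f:
            assets = json.load(f)
        leftovers = [a["path"] for a in assets if a.get("source") == "placeholder"]
        for path in leftovers:
            print("SHIP BLOCKER: placeholder still referenced:", path)
        return 1 if leftovers else 0

    if __name__ == "__main__":
        sys.exit(audit_assets("manifest.json"))

Wire that into CI as a hard failure and a missed texture becomes a build error instead of a day-5 patch.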
> It's up to the Indie Game Awards to decide the criteria.
True and I'm really not too fond of GenAI myself but I can't be arsed to raise a fuss over Sandfall's admission here. As I said above, the line for me is to not let GenAI assets end up in the finished product.
> But creating and picking those placeholders used to be somebody's job, maybe a junior artist.
This argument in this industry is problematic. The entire purpose of computers is to automate processes that are labor intensive. Along the way, the automation process went from doing what was literally impossible with human labor to capturing ever deeper levels of skill of the practitioners. Contrast early computer graphics, which involved intensive human intervention, to modern tools. Since HN almost certainly has more developers than graphics artists, contrast early computer programming (where programmers didn't even have the assistance of assemblers and where they needed a deep knowledge of the computer architecture) to modern computer programming (high level languages, with libraries abstracting away most of the algorithmic complexity).
I don't know what the future of indie development looks like. In a way, indie development that uses third-party tools that captures the skills of developers and graphics artists traditionally found in major studios doesn't feel very indie. On the other hand, they are already doing that through the use of game engines and graphics design/modelling software. But I do know that a segment of the industry that utterly ignores those tools will be quickly left behind.
It's bad because it takes someone's job? However, that job was mundane petty work that seniors didn't want to bother with. Were cars terrible for taking all of those stableboy jobs? Is Excel or data engineering terrible for the obliteration of data entry and low-level bookkeeping jobs? Or is it not just a slippery-slope argument, when what's happening is evolution of tech? IMO people will adapt. While it's up to any event organizers to decide their rules, AI witch-hunts are a Luddite response. AI/LLMs can be major tools in the belt of indies to dethrone AAA. I'd like to be clear that I'm arguing in favor of tooling such as the example of placeholder usage and a pipeline to remove it. I wouldn't defend a scumbag leveraging AI to rip off another game, artist, or dev. It just seems like the lines are being blurred to justify AI witch-hunts.
The game industry, especially AAA, is actually having a major identity crisis right now as technology evolves and jobs adapt around the new tool of AI/LLMs. The Game Awards (not indie) should demonstrate that this dolphin committee you fear already exists, because the limiting factors in all industries are major resources: time, capital, experience. AI/LLMs will enable far more high-skill work to be accomplished with less experience, time, and possibly capital (sidestepping the ethics/practicality of data centers).
It's not about the asset, it's about them first claiming that they did not use gen AI during production. One is an oopsie and the other a blatant lie. If the award requirements say you can't participate if you used generative AI, and you lie about it, it's a pretty clear-cut case. Either be certain you don't ship AI placeholders, or just don't lie. The outrage in this thread and the hyper-focusing on the asset instead of the lie is the problem.
There is a small irony that the Indie Game Awards rejects nominations of games using AI but The Game Awards does not. It is independent teams of developers who are less likely to be able to afford to pay an artist, and who may be able to produce something of value with AI assets that they otherwise would not have the resources for. On the other side, it is big studios with a good track record and more investment who are more likely to be able to pay artists and benefit from their artistry.
To me, art is a form of expression from one human being to another. An indie game with interesting gameplay but AI generated assets still has value as a form of expression from the programmer. Maybe if it's successful, the programmer can afford to pay an artist to help create their next game. If we want to encourage human made art, I think we should focus on rewarding the big game studios who do this and not being so strict on the 2 or 3 person teams who might not exist without the help of AI.
(I say this knowing Clair Obscur was made by a large well respected team so if they used AI assets I think it's fair their award was stripped. I just wish The Game Awards would also consider using such a standard.)
I agree that this holds in theory, but in practice? All the overhyping of AI I've heard from the gaming sector has come from the big studios, not indies. And, as you point out, Clair Obscur isn't the 'most indie' of indies anyway.
That's what the semi-recent whining about Larian saying they use AI was about.
They just use it to cut some of the boring work and iterate over some ideas; once the idea is set in stone, an actual artist does it.
I don't see the problem, because it isn't cutting more artists out of the loop; if anything, they get more of the meaningful work.
It doesn't have to be hyped to be used. For example, today I found these two building their passion project using GenAI, which might otherwise not be possible, who knows: https://reddit.com/comments/1prqfsu
Who is hyping the technology doesn't seem to be too relevant. Big studios have a bigger megaphone and, as another has pointed out, possibly even a financial motivation for shouting it from the rooftops for their investors to hear.
Simple fix: they need a separate category for a game art award (no AI), and the rest of the categories (perhaps including game of the year and best new game) should allow AI.
Right now the rules they're using are going against larger forces in the world that are going to become standard (if they're not already).
And to your point, these are indie developers, Davids going up against the AAA Goliaths that have a bottomless purse with which to shower money on a "product". I dabble in art (and wrote some indie games decades ago) and I am fine with AI-generated art (despite my daughters' leanings in the opposite direction).
I'd agree if this were about The Game Awards or similar, where indie devs are expected to compete against the AAA goliaths, but I've always understood the Indie Game Awards as being more about the craft than the end product.
From the FAQ:
> The ceremony is developer-focused, with game creators taking center stage to present awards, recognize their peers, and openly speak without the pressure of time constraints.
Regardless of AI's inevitability, I don't particularly care to celebrate its use, and I think that's the most reasonable stance for a developer-focused award to take.
That's a good point—this being the indie game awards. I still think it makes sense to have separate categories that allow for AI-generated content but "indie developed" (versus an "Indie Art Award" that absolutely prohibits AI-generated content).
We should be able to celebrate the creation, execution, concept of a game without letting AI assets nullify the rest.
It's the same thing as local restaurants being picky about using organic and environmentally sustainable ingredients while big chain corporations have a preference for low cost ingredients that strip the environment bare. The big corporations could afford organic stuff, but their aim is to just get a product out there and get it done cheaply. The local restaurant can't often compete on price alone, so they sell themselves as being made with care for the consumer. Selling one's product as a moral option has been a fairly reliable marketing tactic for a long time and I'm kind of surprised it's taken this long to enter the gaming industry.
This makes the most sense to me. I expect BigCorps to maximize profit and destroy their product to the point that it's slop. I don't expect indie developers or people "doing it for the craft" to make (or use) slop.
There's not that much irony considering how people into indie games are more about the art and craft of video games, whereas The Game Awards is a giant marketing cannon for the video game industry, and the video game industry has always been about squeezing their employees. If they can hire fewer artists and less QA because of GenAI, they're all for it.
Just two days ago there were reports that Naughty Dog, a studio that allegedly was trying to do away with crunch, was requiring employees to work "a minimum of eight extra hours a week" to complete an internal demo.
> To me, art is a form of expression from one human being to another. An indie game with interesting gameplay but AI generated assets still has value as a form of expression from the programmer.
How though? If questions about style or substance can be answered with "because the AI did it, it's just some stochastic output from the model", I don't see how that allows for expression between humans.
In this case, you'd be judging the AI made assets as simply AI made and the human made gameplay and programming as human made. I'm not suggesting the AI assets would be transformed into art just because they are part of some human creative work.
I would consider myself pretty embedded in the gaming space, and I hadn't heard of the "Indie Game Awards" before yesterday. Last year's award show has <100k views on youtube, and the first article mentioning this (insider-gaming.com's) is written by one of the judges involved.
I'll leave it up to the reader to judge how much of this is genuine and how much is jumping the twitter bandwagon to boost the award show's popularity.
That was probably The Game Awards[0], which is a big deal. I guess whoever is behind Indie Game Awards sees the name confusion as a feature rather than a bug.
I'm not trying to defend "The Indie Game Awards", which I also have never heard of, but The Game Awards are universally acknowledged to be a joke, and always have been. By runtime, it's 80% soulless, samey trailers for AAA games, 10% Imagine Dragons, 5% rooting for that coked-up clarinet player on the edge of the orchestra, and 5% Jeff Keighley rapid-firing off the winners of made-up award categories in under five seconds each.
He just pointed out (correctly) that the game awards that were being spoken of everywhere for the last few weeks were not the one related to this article.
I, on the other hand, will add a judgement to this discussion: if you consider The Game Awards a joke, despite it being by far the most watched event in gaming, eclipsing (by viewer count) entertainment events such as the NBA Finals... you've certainly got "interesting" opinions.
The Game Awards 2025 had more viewers than the Super Bowl, with a total of 171 million global livestreams, vs The Indie Game Awards (7.1k YouTube views and 433 Twitch views).
I don’t think the name confusion can really be blamed on “The Indie Game Awards”, it has to be on “The Game Awards” for choosing the most generic possible name.
Mostly daily browsing of twitter and reddit, r/livestreamfail + various discord communities. Note that I have heard lots about The Game Awards, but this is a different event.
How could it be spammed in previous years, since this is only the 2nd year of "The Indie Game Awards"? Not to mention the event had fewer than 7.5k total views, compared to the 171 million for "The Game Awards".
I bet if they'd only used AI-assisted coding it would be a complete non-event, but oh no, some inconsequential assets were generated, grab the pitchforks!
You think there’s any non niche game developers not using a coding assistant at this point? You think Epic is not using code assistants to develop Unreal Engine?
The use of generative AI for art is being rightfully criticised because it steals from artists. Generative AI for source code learns from developers - who mostly publish their source with licenses that allow this.
The quality suffers in both cases and I would personally criticise generative AI in source code as well, but the ethical argument is only against profiting from artists' work without their consent.
> rightfully criticised because it steals from artists. Generative AI for source code learns from developers
The double standard here is too much. Notice how one is stealing while the other is learning from? How are diffusion models not "learning from all the previous art"? It's literally the same concept. The art generated is not a 1-1 copy in any way.
IMO, this is key to the issue, learning != stealing. I think it should be acceptable for AI to learn and produce, but not to learn and copy. If end assets infringe on copyright, that should be dealt with the same whether human- or AI-produced. The quality of the results is another issue.
> I think it should be acceptable for AI to learn and produce, but not to learn and copy.
Ok but that's just a training issue then. Have model A be trained on human input. Have model A generate synthetic training data for model B. Ensure the prompts used to train B are not part of A's training data. Voila, model B has learned to produce rather than copy.
Many state of the art LLMs are trained in such a two-step way since they are very sensitive to low-quality training data.
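As a toy illustration of the two-step setup (character-level Markov chains standing in for models A and B; nothing like a real LLM pipeline, just the shape of the data flow):

    import random
    from collections import defaultdict

    def train(corpus):
        # Character-level bigram "model": maps each char to observed successors.
        model = defaultdict(list)
        for text in corpus:
            for a, b in zip(text, text[1:]):
                model[a].append(b)
        return model

    def generate(model, seed, n=40):
        out = [seed]
        for _ in range(n):
            successors = model.get(out[-1])
            if not successors:
                break
            out.append(random.choice(successors))
        return "".join(out)

    human_corpus = ["the cat sat on the mat", "the dog ate my homework"]
    model_a = train(human_corpus)                         # A sees human data
    synthetic = [generate(model_a, "t") for _ in range(100)]
    model_b = train(synthetic)                            # B only ever sees A's output
    print(generate(model_b, "t"))

Whether B has thereby "learned to produce rather than copy" is of course exactly the contested question; the sketch only shows the indirection.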
Yeah, right. AI art models can and have been used to basically copy any artist's style, in ways that make the actual artist's hard work and effort in honing their craft irrelevant.
Who profits? Some tech company.
Who loses? The artists who now have to compete with an impossibly cheap copy of their own work.
This is theft at a massive scale. We are forcing countless artists whose work was stolen from them to compete with a model trained on their art without their consent and are paying them NOTHING for it. Just because it is impressive doesn’t make it ok.
Copying a style isn’t theft, full stop. You can’t copyright style. As an individual, you wouldn’t be liable for producing a work of art that is similar in style to someone else’s, and there is an enormous number of artists today whose livelihood would be in jeopardy if that was the case.
Concerns about the livelihood of artists or the accumulation of wealth by large tech megacorporations are valid but aren’t rooted in AI. They are rooted in capitalism. Fighting against AI as a technology is foolish. It won’t work, and even if you had a magic wand to make it disappear, the underlying problem remains.
I love that in these discussions every piece of art is always high art and some comment on the human condition, never just grunt-work filler, or some crappy display ad.
Code can be artisanal and beautiful, or it can be plumbing. The same is true for art assets.
Exactly! Europa Universalis is a work of art, and I couldn't care less if the horse that you can get as one of your rulers is aigen or not. The art is in the fact that you can get a horse as your ruler.
I agree. Computer graphics and art were sloppified, copied, and corporate way before AI, so pulling a Casablanca ("I'm shocked, shocked to find that AI is going on in here!") is just hypocritical and quite annoying.
That's a fun framing. Let me try using it to define art.
Art is an abstract way of manipulating aesthetics so that the person feels or thinks a thing.
Doesn't sound very elusive nor wrong to me, while remaining remarkably similar to your coding definition.
> while asking questions about what it means to be human
I'd argue that's more Philosophy's territory. Art only really goes there to the extent coding does with creativity, which is to say
> the machine does a thing
to the extent a programmer has to first invent this thing. It's a bit like saying my body is a machine that exists to consume water and expel piss. It's not wrong, just you know, proportions and timing.
This isn't to say I classify coding and art as the same thing either. I think one can even say that it is because art speaks to the person while code speaks to the machine, that people are so much more uppity about it. Doesn't really hit the same as the way you framed this though, does it?
> Art eludes definition while asking questions about what it means to be human.
All art? Those CDs full of clip art from the 90's? The stock assets in Unity? The icons on your computer screen? The designs on your wrapping paper? Some art surely does "[elude] definition while asking questions about what it means to be human", and some is the same uninspired filler that humans have been producing ever since the first teenagers realized they could draw penis graffiti. And everything else is somewhere in between.
Are you telling me that, for example, rock texture used in a wall is "asking questions about what it means to be human"?
If some creator with intentionality uses an AI generated rock texture in a scene where dialogue, events, characters and angles interact to tell a story, the work does not ask questions about what it means to be human anymore because the rock texture was not made by him?
And in the same vein, all code is soldering cables so the machine does a thing? Intentionality of game mechanics represented in code, the technical bits to adhere or work around technical constraints, none of it matters?
Your argument was so bad that it made me reflexively defend Gen AI, a technology that for multiple reasons I think is extremely damaging. Bad rationale is still bad rationale though.
I really don't agree with this argument because copying and learning are so distinct. If I write in a famous author's style and try to pass my work off as theirs, everyone agrees that's unethical. But if I just read a lot of their work and get a sense of what works and doesn't in fiction, then use that learning to write fiction in the same genre, everyone agrees that my learning from a better author is fair game. Pretty sure that's the case even if my work cuts into their sales despite being inferior.
The argument seems to be that it's different when the learner is a machine rather than a human, and I can sort of see the 'if everyone did it' argument for making that distinction. But even if we take for granted that a human should be allowed to learn from prior art and a machine shouldn't, this just guarantees an arms race for machines better impersonating humans, and that also ends in a terrible place if everyone does it.
If there's an aspect I haven't considered here I'd certainly welcome some food for thought. I am getting seriously exasperated at the ratio of pathos to logos and ethos on this subject and would really welcome seeing some appeals to logic or ethics, even if they disagree with my position.
> Generative AI for source code learns from developers - who mostly publish their source with licenses that allow this.
I always believed GPL allowed LLM training, but only if the counterparty fulfills its conditions: attribution (even if not for every output, at least as part of the training set) and virality (the resulting weights and inference/training code should be released freely under GPL, or maybe even the outputs). I have not seen any AI company take any steps to fulfill these conditions to legally use my work.
The profiteering alone would be a sufficient harm, but it's the replacement rhetoric that adds insult to injury.
This cuts to the bone of it tbqh. One large wing of the upset over gen AI is the _unconsenting, unlicensed, uncredited, and uncompensated_ use of assets to make "you can't steal a style" a newly false statement.
There are artists who would (and have) happily consented, licensed, and been compensated and credited for training. If that's what LLM trainers had led with when they went commercial, if anything a sector of the creative industry would've at least considered it. But companies led with mass training for profit without giving back until they were caught being sloppy (in the previous usage of "slop").
No, the only difference is that image generators are a much fuller replacement for "artists" than for programmers currently. The use of quotation marks was not meant to be derogatory, I'm sure many of them are good artists, but what they were mostly commissioned for was not art - it was backgrounds for websites, headers for TOS updates, illustrations for ads... There was a lot more money in this type of work the same way as there is a lot more money in writing react sites, or scripts to integrate active directory logins into some ancient inventory management system, than in developing new elegant algorithms.
But code is complicated, and hallucinations lead to bugs and security vulnerabilities so it's prudent to have programmers check it before submitting to production. An image is an image. It may not be as nice as a human drawn one, but for most cases it doesn't matter anyway.
The AI "stole" or "learned" in both cases. It's just that one side is feeling a lot more financial hardship as the result.
There is a problem with negative incentives, I think. The more generative AI is used and relied upon to create images (to limit the argument to image generation), the less incentive there is for humans to put in the effort to learn how to create images themselves.
But generative AI is a deadend. It can only generate things based on what already exists, remixing its training data. It cannot come up with anything truly new.
I think this may be the only piece of technology humans created that halts human progress instead of being something that facilitates further progress. A dead end.
> Generative AI for source code learns from developers - who mostly publish their source with licenses that allow this.
As far as I'm concerned, not at all. FOSS code that I have written is not intended to enrich LLM companies and make developers of closed source competition more effective. The legal situation is not clear yet.
FOSS code is the backbone of many closed source for-profit companies. The license allows you to use FOSS tools and Linux, for instance, to build fully proprietary software.
Well, if it's GPL you are supposed to provide the source code to any binaries you ship. So if you fed GPL code into your model, the output of it should also be considered GPL-licensed, with all implications.
Sure, that usage is allowed by the license. The license does not allow copying the code (edit: into your closed-source product). LLMs are somewhere in between.
"Mostly" is doing some heavy lifting there. Even if you don't see a problem with reams of copyleft code being ingested, you're not seeing the connection? Trusting the companies that happily pirated as many books as they could pull from Anna's Archive and as much art as they could slurp from DeviantArt, pixiv, and imageboards? The GP had the insight that this doesn't get called out when it's hidden, but that's the whole point. Laundering of other people's work at such a scale that it feels inevitable or impossible to stop is the tacit goal of the AI industry. We don't need to trip over ourselves glorifying the 'business model' of rampant illegality in the name of monopoly before regulations can catch up.
I'm not sure how valid it is to view artwork differently than source code for this purpose.
1. There is tons of public domain or similarly licensed artwork to learn from, so there's no reason a generative AI for art needs to have been trained on disallowed content any more than a code-generating one.
2. I have no doubt that there exist both source code AIs that have been trained on code that had licenses disallowing such use and art AIs have that been trained only on art that allows such use. So, it feels flawed to just assume that AI code generation is in the clear and AI art is in the wrong.
> The use of generative AI for art is being rightfully criticised because it steals from artists. Generative AI for source code learns from developers - who mostly publish their source with licenses that allow this.
This reasoning is invalid. If AI is doing nothing but simply "learning from" like a human, then there is no "stealing from artists" either. A person is allowed to learn from copyright content and create works that draw from that learning. So if the AI is also just learning from things, then it is not stealing from artists.
On the other hand if you claim that it is not just learning but creating derivative works based on the art (thereby "stealing" from them), then you can't say that it is not creating derivative works of the code it ingests either. And many open source licenses do not allow distribution of derivative works without condition.
Everyone in this thread keeps treating human learning and art the same as clearly automated statistical processes with massive tech backing.
Analogy: the common area had grass for grazing which local animals could freely use. Therefore, it's no problem that megacorp has come along and created a massive machine which cuts down all the trees and grass which they then sell to local farmers. After all, those resources were free, the end product is the same, and their machine is "grazing" just like the animals. Clearly animals graze, and their new "gazelle 3000" should have the same rights to the common grazing area -- regardless of what happens to the other animals.
Most open-source licenses require attribution, so AI for code generation violates licenses the same way AI for image generation does. If one is illegal or unethical, then the other would be too.
I'm not sure about licenses that explicitly forbid LLM use -- although you could always modify a license to require this! -- but GPL licensed projects require that you also make the software you create open source.
I'm not sure that LLMs respect that restriction (since they generally don't attribute their code).
I'm not even really sure if that clause would apply to LLM generated code, though I'd imagine that it should.
Very likely no license can restrict it, since learning is not covered under copyright. Even if you could restrict it, you couldn't add a "no LLMs" clause without violating the free software principles or the OSI definition, since you cannot discriminate in your license.
Note that this tends to require specific license exemptions. In particular, GCC links various pieces of functionality into your program that would normally trigger the GPL to apply to the whole program, and for this reason, those components had to be placed under the "GCC Runtime Library Exception"[1]
The IGA FAQ states, in its entirety on this topic: "Games developed using generative AI are strictly ineligible for nomination." [1]
Sandfall probably interpreted this reasonably: no AI assets in the shipped product. They say they stripped out AI placeholders before release (and patched the ones they missed). But the IGA is reading it strictly: any use during development disqualifies.
If that's the standard, it gets interesting. DLSS and OptiX are trained neural networks in an infrastructure-shaped raincoat—ML models generating pixels that were never rendered. If you used Blender Cycles with OptiX viewport denoising while iterating on your scenes, you "developed using generative AI."
By a strict reading, any RTX-enabled development pipeline is disqualifying. I wonder if folks have fully thought this through.
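For a sense of how low that bar is, here's roughly what turning it on looks like in Blender's Python API (property names from memory, so treat them as illustrative rather than gospel):

    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'
    # Denoise the viewport preview with NVIDIA's ML denoiser while iterating.
    scene.cycles.use_preview_denoising = True
    scene.cycles.preview_denoiser = 'OPTIX'

A few lines of config and every frame you look at while working is partly painted by a trained neural network. Under the strict reading, that's "developed using generative AI".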
Nobody's probably thought this through, but if I had to guess, the first revision to the rule will be "no _assets_ generated with gen AI" because the most upset parties about gen AI use in gamedev are asset creatives who create textures, models, and audio, perform music and voice acting, etc.
Upscaling technologies are transformative, but they are post-processing. The uproar isn't over what happens in the render pipeline but in the creative process.
Same reason why auto-LoD generation wouldn't and hasn't pissed anyone off: it's not generating LoDs of a mesh that's problematic, it's generating the source model that an artist would create.
The game won GOTY on its merits. Then the AI disclosure came out and it got stripped. If AI use produces obviously inferior work, how did it win in the first place? Seems like the objection is to the process, not the result.
Doubly so if the usage was de minimis.
I think it's the artists, not the tools, that make the art. Overuse of anything is gauche; but I am confident that beautiful things can be made with almost any tool, in the hands of a tasteful artist.
> Seems like the objection is to the process, not the result.
Right. The game is not eligible for the award. This is not a comment on the quality of the game.
The Indie Game Awards require zero AI content. The devs fully intended to ship without AI content but made a mistake, disqualifying themselves for the award. This is simply how competition rules work. I have a friend who plays competitive trading card games, and one day he showed up to a national event with an extra card in his box after playing with some friends late at night. It was an honest mistake, and the judges were quite sympathetic, but he was still disqualified!
I've never liked the argument that there's some imaginary line between the acceptability of AI as a tool for creating art and Photoshop/Krita/Procreate/etc as a tool for creating art.
Rubbing a brush on a canvas was good enough for the renaissance masters, why are we collectively okay with modern "artists" using "virtual brushes" and trivializations of the expressive experience like "undo" when it's not "real art" because they're leaning so heavily on the uncaring unthinking machine and the convenience in creation it offers rather than suffering through the human limitations that the old masters did? Are photographers not artists too then, because they're not actually creating, just instead capturing a view of what's already there?
The usual reply to this is some trite response about how AI is 'different' because you're 'just' throwing prompts at it and it's completely creating the output itself -- as if it's inconceivable that there might be someone who doesn't just shovel out raw outputs from an AI and call it 'art', and is instead actually using it in a contributory role in a larger composition that they, themselves, as a human, are driving and making artistic decisions on.
E33 is a perfect example here. Is the artistic merit of the overall work lessened by it having used AI in part of its creation? Does anyone really, truly believe that they abdicated their vision on the overall work to machines?
Just because someone can drag and drop to draw a circle in an image editing app instead of using their own talent and ability to freehand it instead doesn't mean what they then go on to do with that circle isn't artistic.
I agree with you, and I frankly wasn't trying to reopen the can of worms about AI & art. As I said, I just don't like that particular line of reasoning about AI usage.
Like most things, art exists on a spectrum and there are many levels. Most would say a single pixel isn't art, yet at some point many cross some invisible line where it becomes art. Likewise, at some point a bunch of logic and pixels become a best-selling indie game. It's more than the sum of its parts, and I don't agree with saying that sum is suddenly less just because one of those parts was AI generated. The sum should logically be the same value regardless.
But then that's a very mathematical way of looking at it. Art and the appreciation of it has never been logical, but instead emotional. AI invokes negative emotions in many people, and so the art is diminished in their eyes. This makes sense to me.
However, I don't necessarily agree with this approach of yanking back the award. It reeks of horse buggy whip manufacturers trying to push back the tide. But then I've never understood comparing one piece of art to another and declaring one the winner. If art is simply something that invokes emotion in the viewer, and everyone's emotional response is different, it makes no sense to have awards to me.
We do make this argument all the time though. Film is probably the number one example in my mind. After actors, we celebrate directors more than any other individual in film. Directors often don’t write the script. They don’t handle the camera or the lighting or sound. They don’t create the music. They don’t do the editing in post. They don’t do the acting. But they do direct all of the people doing those things to achieve an overall vision, and we recognize that has significant artistic merit. Directors are not writers, or cinematographers, or composers, or actors, or visual effects artists, or sound technicians. But they are still artists, because art is more than the technical skill to produce something.
I love debating, but I want debate to learn things, not to walk people into traps.
Obviously the woodworker is a person. And you would be on a team that has woodworking as part of their skillset.
But the way you set up your reductio-ad-absurdum it can be read as implying the AI is a person too. O:-)
You know what, rather than just going for a flip rhetorical takedown, what if we took that implication seriously for a second?
What if you did mean to argue that (the) AI is a proto-person. Say you argue that they deserve to be in the credits as a (junior?) member of the team. That'd be wild! A really interesting framing, which I haven't heard before.
Or the weaker version: Use said framing pro-forma as a (practical?) legal fiction. We already have rules on (C) attribution. It might be a useful framing to untangle some of the spaghetti.
> If AI use produces obviously inferior work, how did it win in the first place?
They used some AI placeholders during development, as it can majorly speed up/unblock the dev loop while not really having any ethical issues (you still hire artists to produce all the final assets), and in some corner cases they forgot to replace the placeholder.
Also, some of the tooling they might have used might technically count as gen AI. E.g., way before LLMs became big I had dabbled a bit in gen AI, and there were some decent line-work smoothing algorithms and similar, with none of the ethical questions. Tools which help remove some dumb, annoying overhead for artists but don't replace "creative work". But which anyway are technically gen AI...
I think this mainly shows that a blanket ban on "gen AI", instead of one like, idk, "gen AI used in ways which replace artists", is kinda tone-deaf/unproductive.
> AI placeholders during development as it can majorly speed up/unblock
Zero-effort placeholders have existed for decades without GenAI, and were better at the job. The ideal placeholder gives an idea of what needs to go there, while also being obvious that it needs to be replaced. This [1] is an example of an ideal placeholder, and it was made without GenAI. It's bad, and that's good!
A GenAI placeholder fails at both halves of what a placeholder needs to do. There's no benefit for a placeholder to be good enough to fly under the radar unless you want it to be able to sneak through.
I've actually considered hiring artists to help me out a few times too under sort of comparable circumstances? I could use AI to generate basic assets, and then hire artists for the real work! More work for artists, better quality for me. Unfortunately, I fear I'd get yelled at (possibly as a traitor to both sides?)
Frankly, in the wider debate, I think engagement algorithms are partially to blame. Nuanced approaches don't get engagement, so on every topic everyone is split into two or more tribes yelling at each other. Folks in the middle who just want to get along have a hard time.
(Present company excepted of course. Dang is doing a fine job!)
Is anyone else detecting a phase shift in LLM criticism?
Of course you could always find opinion pieces, blogs and nerdy forum comments that disliked AI; but it appears to me that hate for AI gen content is now hitting mainstream contexts, normie contexts. Feels like my grandma may soon have an opinion on this.
No idea what the implications are or even if this is actually something that's happening, but I think it's fascinating
LLM hate for use in art has been pretty mainstream from the start. The difference in criticism between use in code generation and use in art generation is palpable. I don't think anyone took kindly to the discourse of movie producers buying actor likeness rights and having perpetually young old actors for all future movies.
Programmers criticized the code output. Artists and art enjoyers criticized cutting out the artist.
LLMs have had a couple of years by now to show their usefulness, and while hype can drive it for a while, it's now getting to the point where hype alone can't. It needs to provide a tangible result for people.
If that tangible result doesn't occur, then people will begin to criticize everything. Rightfully so.
I.e., the future of LLMs is now wobbly. That doesn't necessarily mean a phase shift in opinion, but wobbly is a prerequisite for a phase shift.
(Personal opinion at the moment: LLMs need a couple of miracles in the same vein as the discovery/invention of transformers. Otherwise, they won't be able to break through the current reliability ceiling, which is too low at the moment for anything truly useful.)
You’re reading it wrong: rather, AI hype had been common (but not the majority position) in tech contexts for a while, especially from those that have something to sell you.
What you derogatorily call normies are the rest of the world, minding their own business until one day some tech whiz came around to say “hey, I have built a machine to replace all of you! Our next goal is to invent something even smarter under our control. Wouldn’t that be neat?” No wonder the average person isn’t really keen on this sort of development.
> AI hype had been common (but not the majority position) in tech contexts for a while, especially from those that have something to sell you.
There's been a whole lot of that for quite a long time targeting normie contexts, too; in fact, the hate in normie contexts is directly responsive to it. The hype in normie contexts is a lot of particularly clumsy grifting plus the nontechnical PR of the big AI vendors (categories which overlap quite a bit, especially in Sam Altman’s case). And the hate in normie contexts shows basically zero understanding of even what AI is, beyond what could be gleaned from that hype plus some critical pieces on broad (e.g., total water and energy use, RAM prices) and localized (e.g., fossil fuel power plants in poor neighborhoods directly tied to demand from data centers) economic and environmental impacts.
> What you derogatorily call normies
I am not using “normie” derogatorily, I am using it to contrast to tech contexts.
The most typical reactions I see outside of techie and arty spaces where people are most polarised about it are:
- Annoyance at stupid AI features being pushed on them
- Playing around with them like a toy (especially image generation)
- Using them for work (usually writing tasks), with varying degrees of effectiveness, from pretty helpful to actively harmful, depending on how much of a clue they have in the first place
Discussion or angst about the morality of training, or about threats to jobs, doesn't really enter much into it. I think this apathy is also reflected in how the story has seemingly not affected the sales of this game at all in the months it has been reported on in the video game press. I also think this is informed by how most people using these tools can fairly plainly see they aren't really a complete replacement for what they themselves actually do.
They aren't using "normie" derogatorily; they just use it as a proxy for "non-tech people".
> “hey, I have built a machine to replace all of you! Our next goal is to invent something even smarter under our control. Wouldn’t that be neat?” No wonder the average person isn’t really keen on this sort of development.
Nope, most are just annoyed by AI slop bombarding them at every corner, AI scams making the news for claiming another poor grandma, and the AI tech industry making things expensive. Most people's jobs are not under direct threat at the moment, unless you work in tech or art.
It is fascinating. It's showing of course that AI has gone mainstream.
There was a time that I remember when you could gripe at a party about banner ads showing up on the internet and have a lot of blank stares. Or ask someone for their email address and get a quizzical look.
I pointed my dad to ChatGPT a few days ago and instructed him on how to upload/create an AI image. He was delighted to later show me his AI "American Gothic" version of a photo of him and his current wife. This was all new to him.
The pushback, though, I think is going to be short-lived, in the way other pushbacks were short-lived. (I remember self-checkout kiosks in grocery stores were initially a hard sell, as an example.)
How many American Gothic AI fake photos do you think he'll make? Sounds like a novelty experience to me. I also loved my first day in Apple's Vision Pro. It was mind-blowing. On the 4th day I returned it. Novelty wears off, no matter how cool it might seem initially.
Oh, not disagreeing with you. A strange thing has happened in the past, where what was novel also became commonplace. Not in all cases, of course (and I personally also believe VR is one of those things that will never become commonplace).
It’s the usual “I don’t like it, I’m against it, but it’s okay if I use it” thing. People understand the advantage it gives one person over another, so they will still use it here and there. You’ll have some people who are vehemently against it, but it will be the same as people who are categorically against having smartphones, or who avoid any Meta products because of tracking, etc.
Just like feminism when it was starting: back then, millions of women believed it was silly for them to vote, and those who believed otherwise had to get loud to get more on their side. That's one example; similar things have happened with hundreds of other things we now take for granted. So its value as a measure of judgment is very low by itself.
We’ve observed this in AI gen ads (or “creatives” as ad people call them)
They work really well, EXCEPT if there is a comment option next to the ad - if people see others calling the art “AI crap” the click rate drops drastically :)
I think that's a hint that people already dislike AI ads on principle but it's good enough now to fool them, and the comment section provides transparency.
If I was vegan and found out after the fact that a meal that I enjoyed contained animal products in it that doesn't mean I'm some hypocrite for consuming it at the time. Whether I enjoyed it or not at the time it still breaches some ethical standard I have, abstaining from it from then on would be the expected outcome.
The same works the other way, and actually a lot better IMO.
Let's imagine a scenario with two identical restaurants with the exact same quality of food.
One sells their dish as a fully vegan option, but doesn't tell the customers.
Hardline "oorah, meat only for me" dude walks in and eats the dish, loves it.
If he goes to the other restaurant and is told beforehand that "sir, this dish is fully vegan", do you think he'd enjoy it as much?
Prejudices steer people's opinions, a lot. Just like people stop enjoying movies and games due to some weird online witch-hunt that might later turn out to be either a complete willful misunderstanding of the whole premise (Ghost in the Shell) or a targeted hate campaign (The Marvels and many, many other movies starring a prominent feminist woman).
Look at how easy it is to make the argument in the other direction:
> People were told by large companies to like LLMs and so they did, then told other people themselves.
Arguments like those add nothing to the discussion. Treat others like human beings. Every other person on the planet has an inner life as rich as yours, and the same ability to think for themselves (and the same inability to perceive their own bias) that you have.
Just as they were told to like them in the first place. A lot of this is driven that way because most of the public only has a surface-level understanding of the issues.
It's because the amount of AI slop bombarding people from every side has increased and created a knee-jerk reaction to anything AI, even when it's actually of the "remove the boring part of the work" variety.
The issue with "removing the boring part of work" is that which part of the work is "boring" is subjective. There are going to be plenty of people that don't think that what they do is the "boring stuff that should be automated away." Whether this is genuine enjoyment for what they do or just an attempt to protect their career, both are valid feelings to have.
The art bubble is generally considered more "normie" than the tech bubble, and it has been strongly anti-AI-art since even before the introduction of the original GitHub Copilot.
It feels like a similar trend to the one that NFTs followed: huge initial hype, stoked up by tech bros and swallowed by a general public lacking a deep understanding, tempered over time as that public learns more of the problematic aspects that detractors publicise.
I don't feel NFTs ever really had much interest among the general public - average reaction just being "I don't get it, that sounds pointless".
Whereas AI seemed to have a pretty good run for around a decade, with lots of positive press around breakthroughs and genuine interest if you showed someone AI Dungeon, DALL-E 2, etc. before it split into polarized topic.
NFTs had far fewer downsides than LLMs and GenAI, since the main downside was just wasted electricity. I didn't have to worry about someone cloning my voice and begging my mom on the phone for money.
If you look at daytime TV in the UK, there are a lot of ads targeting the elderly talking about funeral cover and life assurance and so on.
I for one cannot wait for a future where grandparents get targeted ads showing their grandchildren, urging them to buy some product or service so their loved ones have something to remember them by...
If a fraction of the AI money went into innovative digital content-creation tools and workflows, I'm not sure AI would seem all that useful to artists. Just look at all those SIGGRAPH papers throughout the years that are filled with good ideas but lacked the funding and expertise to put a really good UI on top.
I don't think this argument is going to be very compelling. The people whom you would be trying to convince here, would just argue that the Luddites were correct in their fight against human labor being displaced. They'll argue that the power artisans had over their work was diminished with the advent of the loom, just like the power artists have over their labor is being diminished right now.
I don't disagree with your point, but regardless of how you or I feel about it, this flap will likely seem quaint a decade from now. It's the unstoppable way the world is moving.
I'd love for them to create a separate category for "Best non-AI game". They can fight it out over that award. Perhaps then in a decade or so they will quietly let the award category fade away.
You don't really need to win an argument with luddites. Completely rejecting extremely useful technology and then picking a fight with people who don't is a way to speedrun losing, whether you have "compelling arguments" or not. If the Luddites were correct, they wouldn't be dead.
This is crazy. Tools like Photoshop have gen AI features built into them. Does that mean Photoshop is now a minefield for artists? If a single artist uses the wrong tool once, does that disqualify the entire final product from awards, even if the asset is fully removed from the final build?
IDEs now have “AI” autocomplete; will a game become ineligible if a single dev accidentally presses tab instead of writing the whole function by hand? If a script writer uses ChatGPT to generate ideas, straight up ban?
Where does the organisation intend to draw the line?
Better blacklist Google as well. You don't want anyone on the team searching anything on Google lest their search accidentally triggers the LLM response (meaning: they prompted Google Gemini).
> Where does the organisation intend to draw the line?
The answer to this question is always "somewhere". Just because I can't proclaim an exact number of trees that constitute a forest doesn't mean the concept doesn't exist.
No, but it becomes a dubious concept when you define forests as collections of only conifer trees and declare that deciduous trees don't count toward the definition of a forest.
Ultimately this move might have just been to increase visibility for an otherwise niche awards show (which it has clearly done). Also, by eliminating the obvious best indie game of the year, it opens up the field a bit to more "normal" contenders. Expedition 33 is basically a AAA-quality game; it's only considered "indie" because a small unknown team made it.
My objection to LLMs is the same that I had for TDD. There's all these people saying that you just gotta try it, but when I do, the effect is lesser than just using my preexisting skills. Oh, it's not for you? Wrong, here's some tautological or contradictory or poetic or nonsensical advice that'll be 'the wrong way' a week from now.
Do TDD and LLMs have a kernel of utility in them? Yeah, I don't see why not. But what the majority of people are saying doesn't seem to be true, and what the minority of people I can actually see using them 'for reals' are doing just isn't applicable to anything I care about.
With that in mind, the only thing less real to me than a tool that I have to vibe with at a social zeitgeist level to see benefits from is an award when I already have major financial and industrial success.
Half the people on my team have played the game. For months, all I would hear about w.r.t. games was how this game was smashing milestones and causing the entire industry to do some soul searching or put their fingers in their ears.
I'm sure they can console themselves from having lost this award with their piles of money.
[An LLM did help me with a cryptography api that was pretty confusing, but I still had to problem solve that one because it got a "to bytes" method wrong. So... once in a blue moon acceleration for things I'm unfamiliar with, maybe.]
This does make it a bit more suspicious. It seems unlikely they coincidentally used gen AI placeholders only for the one case where it’s absurdly obvious.
To be consistent, if you wish to protect workers by rejecting artificially produced assets, you should feel the same about textiles produced by industrial machinery. Either this decision was wrong or the Luddites had a good point.
Sure, but for the body of folks offering a gaming award, there is little power they have over the textile industry.
To others you may be addressing, I suspect they would say the ship has already sailed on textiles. Perhaps they are trying to sink this ship before it sails.
To help prevent confusion: Clair Obscur was not stripped of its record-breaking 9 awards at The Game Awards.
The Indie Game Awards, despite sounding similar to The Game Awards, is an unrelated organization that holds its awards the same week. It's small, and this is its second year.
> "Generating placeholder assets is completely acceptable, etc."
Not if it's against the rules. They got caught with skidmarks. And while the "Ackshually, those skidmarks are just placeholders" defense may elicit a few cheap laughs, it doesn't matter if you follow the rule to its logical conclusion. Any possible deception in such cases comes on top of it. As it always has; that doesn't change just because you found a new plaything (LLMs) in the box.
When it comes to AI, I'm more of a luddite at the moment; things change like every 6 months when it comes to prompting the models.
But I don't mind people using AI; it's their own choice. The focus then just becomes the curation skill of the individual, team, company, etc. over the generated AI output. So taking away the award is kind of weak, given people enjoyed the game.
> When it comes to AI, I'm more of a luddite at the moment; things change like every 6 months when it comes to prompting the models. [...] So taking away the award is kind of weak, given people enjoyed the game.
To nitpick: the independent game awards are the Luddites here. The Luddites were a protest movement, not just a group of people unfamiliar with technology.
In the historical context that's apparently become appropriate again, Luddites violently protested the disruptive introduction of new automation in the textile industry that they argued led to reduced wages, precarious employment, and de-skilling.
This is the kind of rule that won’t hold water in the very near future. It’s going to be impossible to do certain kinds of things at scale without AI, and media production is a very competitive field. As long as the quality bar is held high and there are professional artists curating its use, I see no problem at all in using AI as a tool.
Gamer social movements always burn bright at first, then die when they demand too much purity to reconcile with the fundamental truths: people want to make games and, when they're good, people want to play them. Trying to stop people from using (even experimenting with!) new tools is doomed, just like the old attempts to boycott games over their business models or their creators' politics/sexuality/whatever.
I wonder what definition of AI they're using? If you go by the definition in some textbooks (e.g., the definition given in the widely used Russell and Norvig text), basically any code with branches in it counts as AI, and thus nearly any game with any procedurally generated content would run afoul of this AI art rule.
That's all I've found as well, but, personally, I find that a bit unclear, for a couple of reasons. First, are they saying that the game itself can use generative AI, but it can't be used in the development of the game? So that would mean that if the game itself generates random levels using a generative AI approach, that's allowed, but, if I were to use that same code to pre-generate and manually modify the levels, that wouldn't be allowed because I'm now using generative AI as part of the development process? I.e., I can create a game that itself is a generative AI, but I can't use that AI I've built as part of the development of a downstream game?
And, second, what counts as generative AI? A lot of people wouldn't include procedural generative techniques in that definition, but, AFAIK, there's no consensus on whether traditional procedural approaches should be described as "generative AI".
And a third thing is, if I use an IDE that has generative AI, even for something as simple as code completion, does that run afoul of the rule? So, if I used Visual Studio with its default IntelliCode settings, that's not allowed because it has a generative AI-based autocomplete?
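To make the procedural-generation question concrete: a classic drunkard's-walk level generator, like the minimal Python sketch below (every name and parameter is just illustrative), produces content from randomness, yet most people would not call it generative AI, and the rule as written doesn't say which side of the line it falls on.

```python
# Minimal "drunkard's walk" level generator: random, content-producing,
# and decades old -- but is it "generative AI" under the rule?
import random

def carve_dungeon(width=40, height=20, steps=400, seed=None):
    rng = random.Random(seed)
    grid = [["#"] * width for _ in range(height)]  # start as solid rock
    x, y = width // 2, height // 2                 # walker starts centered
    for _ in range(steps):
        grid[y][x] = "."                           # carve out floor
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 1), width - 2)         # keep a solid border
        y = min(max(y + dy, 1), height - 2)
    return ["".join(row) for row in grid]

if __name__ == "__main__":
    print("\n".join(carve_dungeon(seed=33)))
```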
That's not quite true though, right? Because diffusion models are also generative AI and they're not LLMs. Heck, they probably got disqualified, not for the use of an LLM, but for the use of a diffusion model.
So I think "gen AI" is an umbrella term. The question is, do older techniques like GANs fall under gen AI? A GAN is technically a generative technique that can upscale images, so it's generating those extra pixels, but I don't know if it counts.
There's not that much difference between diffusion models and other auto-regressive models (https://www.youtube.com/watch?v=zc5NTeJbk-k). But I'm of the opinion that "generative AI" is a terrible umbrella term. Taken seriously, it should include basically all of digital art: the flood fill / paint bucket tool can be considered AI, and any program using a search algorithm can be phrased in the classic AI terms of a sense-think-act loop.

Nevertheless, I do understand what people tend to mean by it when they're raging. Right now it might best be defined in terms of workflow: a human uses natural language to describe what they want, and moments later a plausible image appears trying to match it. This clearly separates it from every other tool in a digital artist's program, even the many one could arguably call generative AI. It also separates it from stock-photo/texture searches done externally to an art program, as those are done in a query language rather than natural language.
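To make the flood-fill point concrete, here's a toy breadth-first-search version (a minimal sketch; function and variable names are just illustrative). Structurally it is the same sense-think-act loop an AI textbook would describe: sense the next pixel, decide whether it matches the target colour, act by repainting and expanding the frontier.

```python
# Toy flood fill as breadth-first search over a pixel grid.
from collections import deque

def flood_fill(grid, start, new_color):
    """Repaint the connected same-colour region of grid containing `start`."""
    rows, cols = len(grid), len(grid[0])
    r0, c0 = start
    target = grid[r0][c0]
    if target == new_color:              # nothing to do; avoids looping forever
        return
    frontier = deque([(r0, c0)])
    while frontier:
        r, c = frontier.popleft()        # "sense" the next candidate pixel
        if 0 <= r < rows and 0 <= c < cols and grid[r][c] == target:
            grid[r][c] = new_color       # "act": repaint it
            # "think": expand the search to the four neighbours
            frontier.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])

canvas = [[0, 0, 1],
          [0, 1, 1],
          [0, 0, 0]]
flood_fill(canvas, (0, 0), 2)
print(canvas)  # [[2, 2, 1], [2, 1, 1], [2, 2, 2]]
```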
It's not meant to be clever. They have a rule that says, in its entirety, "Games developed using generative AI are strictly ineligible for nomination."
Do they count procedural level generation as generative AI? Am I crazy that this doesn't seem clear to me?
No, we're at the phase with AI where people have extremely strong feelings, it's not well understood, and the definitions are not clear. I am with you in that rules like this seem dogmatic and hard to understand the implications of.
Rather than going into a huge rant about this, let me just give a quick anecdote.
It used to be that there were tons of websites, like textures.com, which curated huge databases of textures usable by art professionals and hobbyists alike. Some of it was free, some you had to pay for, but generally speaking it wasn't too expensive, and if you picked up 3D modeling as a hobby, you could produce pretty decent results without spending a dime.
Then came the huge companies (you know which ones), which slurped up all these websites and turned them into SaaS monstrosities with f2p mechanics. Textures were no longer free; you had to pay in 'tokens' which you got from a subscription, which pushed you into opaque pricing models, bundled subscriptions, accidental yearly signups with cancellation fees... you know the drill.
Then came AI, which is somehow fair use, and instead of having to pay for that stuff, you could ask SD to generate a tiling rock texture for you.
Is this blatant copyright-washing? I'd argue yes. But in this case, does copyright uphold any morally supportable principle, or does it help artists get paid?
People were against steam engines, tractors, CGI, self-checkouts, and now generative AI. After some initial outrage, it will be tightly integrated into society. Like how LLMs are already widely used to assist in coding.
Or not. Unlike all of the above, AI directly conflicts with the concept of intellectual property, which is backed by a much larger and more influential field.
Regardless of the definition of the word, I don't think anybody would call a game indie that has tens of millions in production budget, over 300 developers working on it, and a movie deal before it was even released.
What's the maximum developer count? Do outsourced assets count, and if so, how? By the number of people at the outsourcing company who directly worked on the assets, or by its whole headcount?
Just admit you have no idea about indie games. For many years now it has been clear what is NOT an indie game, and a 7-figure production budget (marketing not included; most indie games don't even have marketing outside of social media) with hundreds of people working on it is exactly that. Just compare them to any other indie game if you want to educate yourself about something before posting.
They had an orchestra the size of entire indie game studios for the music alone; does that seem indie?
>Of course I think we should study art! Why yes, of course I studied the greats to hone my skills, sometimes even copying their work directly to strengthen a specific skill set!
>AI "studies" prior works to hone its skill set...
"While the assets in question were patched out, it still goes against the regulations we have in place. As a result, the IGAs nomination committee has agreed to officially retract both the Debut Game and Game of the Year awards"
Looks like "regulations nitpicking". In the end it doesn't represent the players best interests.
True. Especially for indie game awards: indie developers have the fewest resources available and would most likely benefit the most from some use of AI. At that scale, often even reasonably paid game developers are expensive.
“Whoops! We forgot to disclose that we ripped off thousands of other artists when we made our game. But don’t worry, it was only for placeholder stuff!”
Why is usage of AI even a discussion point? Steam now requires publishers to disclose whether they used AI during game creation. It is a tool, and as a consumer I judge the end product. I don't care what tools were used in production, just as I don't care if you use Photoshop, Pixelmator, Maya, 3ds Max, or whatnot. The end result is what counts. And if the end result is full of bullshit AI slop and is not fun to play, don't give it an award. I played Clair Obscur, and it is an absolutely stunning and beautiful game.
These things will keep happening and the bar to be against certain use cases of AI will shift gradually over time.
Before we know it we will have entrusted a lot to AI and that can be both a good or a bad thing. The acceleration of development will be amazing. We could be well on our way to expand into the universe.
LOL this is beyond idiotic. Banning AI-generated assets from being used in the game is a red line we could at least debate.
But banning using AI at all while developing the game is... obviously insane on its face. It's literally equivalent to saying "you may not use Photoshop while developing your game" or "you may not use VS Code or Zed or Cursor or Windsurf or Jetbrains while developing your game" or "you may not have a smartphone while developing your game".
Interesting. I don't do games, so I may be wrong, but I thought a lot of Unreal Engine devs used Jetbrains. So what editors do they use? Are there current IDEs or code editors shipping in 2025 that don't have any LLM-based coding features?
OK, maybe my point got lost because I didn't know that, but I should have just added Visual Studio to my list — it too has LLM and agentic features, which was my point.
If you can't use LLMs to generate placeholder graphics that don't ship in the actual game, then why can you use coding editors that let you use LLMs to generate code?
If LLMs were simply a niche but somewhat useful technology people could choose to use or avoid, then sure, such an absolutist stance seems excessive. But this technology is being aggressively pushed into every aspect of our lives and integrated into society so deeply that it can't be avoided, and companies are pushing AI-first and AI-only strategies with the express goal of undermining and replacing artists (and eventually programmers) with low quality generic imitations of their work with models trained on stolen data.
To give even an inch under these circumstances seems like suicide. Every use of LLMs, however minor, is a concession to our destruction. It gives them money, it gives them power, it normalizes them and their influence.
I find the technology fascinating. I can think of numerous use cases I'd like to explore. It is useful and it can provide value. Unfortunately it's been deployed and weaponized against us in a way that makes it unacceptable under any circumstances. The tech bros and oligarchs have poisoned the well.
I mean, I share some of the concerns you expressed, but at the same time there is no chance at all that working programmers and artists won't be using LLMs (and whatever "AI" comes next).
I'm a programmer, and I enjoyed the sort of "craftsman" aspect of writing code, from the 1990s until... maybe last year. But it's over. Writing code manually is already the exception, not the rule. I am not an artist, and I also really do understand that artists have a more legitimate grievance (about stealing prior art) than we programmers do.
As a practical matter, though, that's irrelevant. I suspect being an "artist" working in games, movies, ads, etc will become much like coding already is: you produce some great work manually, as an example, and then tell the bots "Now do it like this ... all 100 of you."
It’s like banning any and all use of chainsaws for any kind of work, ever, just because some bros juggle them and hold chainsaw-juggling conventions.
It’s just a tool, but like any tool it can be used the right way or wrong way. We as a society are still learning which is which.
More like banning automated forest clearing power chainsaw robots because they mowed down all the forests killing most life on earth. That's a terrible analogy, but better than yours.
"Existing outside of the traditional publisher system, a game crafted and released by developers who are not owned or financially controlled by a major AAA/AA publisher or corporation, allowing them to create in an unrestricted environment and fully swing for the fences in realizing their vision."
In other words, "indie" means a developer-driven game independent of the establishment. It doesn't necessarily imply a low budget or the lack of professional experience.
The reason I hate AI generated code is because it's low quality and a pain to review. Similarly, AI generated images and videos are also low quality and not worth paying attention to.
If a game is well made and people enjoy it then what's the problem with utilising AI generated code or assets? What's the objective?
Really makes it clear how ridiculous the mania about AI usage is.
The game is great and there is absolutely nothing in it that would suggest to the player that AI was used for anything.
Putting essentially arbitrary limitation on which tools game developers are allowed to use is just nonsensical.
Yes, the output of AI models can be really bad, but then a game obviously does not deserve an award.
Especially for an indie game, with limited resources, AI can be a huge force multiplier. Gatekeeping awards based on these meaningless characteristics seems just very strange.
We should probably strip the Indie Game Awards of their awards show because the accountant used ChatGPT.
If they want to ban AI from their show, that is their prerogative, but considering that every nominee probably used AI somewhere (I'd bet money on this), this feels like blatantly dishonest posturing.
You think many are built without any assistance for coding? My impression was that people were mostly concerned about game assets like graphics and music
I think many are built without the use of gen AI to create assets. Obviously, the term "AI" is flexible enough that you could classify every piece of software as involving AI if you wanted to, but I don't think that's productive.
I would assume that if a tool is there and the alternative is too costly, they would use the tool instead of burying their project. Just today I stumbled over this, for example, where they use GenAI as well: https://reddit.com/comments/1prqfsu
Not for coding, but today I stumbled upon these two building their passion project using GenAI, which would otherwise perhaps not be possible: https://reddit.com/comments/1prqfsu
If you join a contest with a rule that doesn't make sense, lie about following the rule when you did not, and the contest disqualifies you, it's not the rule that's the problem; it's the lying about complying with the rule when you did not. Are you really so confused by this whole thing that you don't get that, or are you just another Big Tech AI fanbro?
It’s interesting, because we have examples of other sects in the past that also opposed human progress through technology. History is repeating itself.
> But the Luddites themselves “were totally fine with machines,” says Kevin Binfield, editor of the 2004 collection Writings of the Luddites. They confined their attacks to manufacturers who used machines in what they called “a fraudulent and deceitful manner” to get around standard labor practices. “They just wanted machines that made high-quality goods,” says Binfield, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”[1]
In that case, are the neo-Luddites worse than the original Luddites, then? Since many are definitely not "totally fine with the machines", and definitely do not confine their attacks to the manufacturers that go against worker rights, but include the average person in their attacks. And the original Luddites already got a lot of hate for attempting to hold back progress.
I don't know about worse, but I think the situations are very similar. It's inaccurate to think the Luddites just hated technological advancement for the sake of it. They were happy to use machines; why wouldn't they be, if they had a back-breaking and monotonous job and the machine made it easier?
The issue is not the technology per se, it's how it's applied. If it eliminates vast swathes of jobs and drives wages down for those left, then people start to have a problem with it. That was true in the time of the Luddites and it's true today with AI.
It's unclear if Gen AI promotes any sort of human progress.
By all means, I use it. In some instances it is useful. I think it is mostly a technology that causes damage to humanity, though. I just don't really care about it.
I think it's more the fact that they lied before nomination than the AI usage itself. Any institution is bound to disqualify a candidate if it discovers it was admitted on false grounds.
I wonder whether, if the game's directors had actually made their case beforehand, they would perhaps have been allowed to keep the award.
That said, the AI restriction itself is hilarious. Almost all games currently being made would have programmers using copilot, would they all be disqualified for it? Where does this arbitrary line start from?
Are you sure? A survey by the YouTuber Games And AI found that the vast majority of indie game developers are either using, or considering using AI. Like around 90%.
This is just one example, but today I found this, where two people built their passion project using GenAI for image generation (+ Photoshop); maybe otherwise this project wouldn't even be possible: https://reddit.com/comments/1prqfsu
Only when it comes to graphics/art. When it comes to LLMs for code, many people do some amazing mental gymnastics to make it seem like the two are totally different, and one is good while the other is bad.
> That said, the AI restriction itself is hilarious. Almost all games currently being made would have programmers using copilot, would they all be disqualified for it? Where does this arbitrary line start from?
AI OK: Code
AI Bad: Art, Music.
It's a double standard because people don't think of code as creative. They still think of us as monkeys banging on keyboards.
It is silly, considering there is obviously a much higher chance that a code-generating LLM produces a copy of existing copyrighted code than that an image-generating diffusion model produces a copy of an existing copyrighted image.
> It's a double standard because people don't think of code as creative.
It's more like the code is the scaffolding and support, the art and experience is the core product. When you're watching a play you don't generally give a thought to the technical expertise that went into building the stage and the hall and its logistics, you are only there to appreciate the performance itself - even if said performance would have been impossible to deliver without the aforementioned factors.
I would disagree, code is as much the product in games as the assets.
Games always bear their game engine's touch, and for indie games that's often a good part of the process. See for example Clair Obscur here, which clearly has the UE5 character hair. The engine defines what the game can and cannot do, and that shapes the experience.
Then the gameplay itself depends a lot on how the code was made, and iterations on the code also shape the gameplay.
A blanket ban is the way to go on this, people trying to muddy the waters professing they just have nuanced opinions know what they are doing... it's only a horse armour pack, it doesn't affect gameplay, you don't have to use it, you won't notice if it's not there...
After the huge impact on the PC gaming community, it's logical to despise AI and ban it from any awards. First cryptocurrencies pumped up huge price rises on GPUs, then prices wouldn't return to normal due to AI, and now it's impacting RAM prices.
Next year a lot of families will struggle to buy a computer their kids need for school, due to some multibillion-dollar tech companies going all-in.
Correct decision. Skidmarks were detected to be present and it wasn't sabotage or somesuch. That means no award as skidmarks are considered so unprofessional at that particular award that, at this point, there's a rule against them.
As an indie game developer: the idiots who made this decision do not represent us and are completely detached from actual game production over the last 3 years.
For those who might care, we use generative AI as much as possible, in every way possible, without compromising our vision; this includes sound, art, animation, and programming. These outputs are often edited or entirely redone (effectively placeholders). It's part of the process, similar to using procedural art generation tools like geometry nodes in Blender or fluid-sim particle generators.
And btw, both UE5 and Unity now have gen AI features (and addons) that all developers can and will use.
https://english.elpais.com/culture/2025-07-19/the-low-cost-c...