

I think different people have different reasons for why they don’t like or don’t want to use AI.
In the case of the “automatic” “filters” applied to pictures taken on phones, this is (or was) called computational photography. Over time more capabilities were added to these systems, until we now have the moon situation and the latest neural-network processing.
If someone only cares about environmental impact, that doesn’t really apply here when the processing happens on-device, since a phone is by definition low power and thus doesn’t consume water for cooling or much electricity for compute.
However, some people care about copying, for numerous and possibly conflicting reasons. Generating assets might offend their sense that IP was stolen to make it possible, since it’s a pretty well-known fact that these models were trained in large part on dubiously licensed or entirely unlicensed works. I think a reasonable argument can be made that the algorithms that make LLMs work parallel compression. But whatever the case, the legality doesn’t matter for most people’s feelings.
Others don’t like that assets are generated by compute at all, maybe for economic or political reasons. Some might feel that a social contract has been violated. For example, it used to be the case that on large social media, you had some kind of “buy in” from society. The content might have been low quality or useless drivel, but there was a relatively high cost to producing lots of content, and the owners of the site didn’t have direct or complete control of the platform.
Now a single person or company can create a social media site, complete with generated content and generated users, and sucker clueless users into thinking it’s real. Echo chambers were a problem before, with people getting sucked into a bubble of their peers; now another set of users is likely to get sucked into an entirely generated echo chamber.
We can see this happening now. Companies like OpenAI are creating social media sites (“apps”, as they call them now) filled only with slop. There are even companies that make apps for romance and dating with virtual or fake partners.
Generated content is also undesirable for some users simply because they want to see the output of a person. There is already plenty of factory bullshit on the various app stores; why would they need or want the output of a machine when there is already predatory content out there they could have now?
Some people are starting to wake up to the fact that they have only a single life. Chasing money doesn’t do it for most. Some find religion, others want to achieve and to see others achieve. Generating content isn’t an achievement of the person initiating the generation; they didn’t suffer to make it. A person who slaves away in art school for years, takes a shit job they had looked up to for years, and then does the best work they can under crazy pressure has achieved something.
An issue I see with a lot of scripts that attempt to automate the generation of garbage is that the output would be easy to identify and block, whereas if the poison looks similar to real content, it is much harder to detect.
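To make that concrete, here is a minimal sketch of the kind of naive filter a site or crawler operator could run against submissions. The function name and thresholds are my own invention, purely for illustration; the point is just that low-effort generated garbage tends to have obvious statistical tells, while poison that reads like real prose wouldn’t trip anything like this.

```python
# Toy filter: flags text whose statistics scream "automated filler".
# Thresholds are made up for illustration, not tuned against anything real.
import re
from collections import Counter

def looks_like_garbage(text: str) -> bool:
    words = re.findall(r"[a-z']+", text.lower())
    if len(words) < 50:
        return False  # too short to judge either way
    # Tell 1: very low lexical diversity (the same few words over and over).
    type_token_ratio = len(set(words)) / len(words)
    # Tell 2: a single trigram dominating the whole text.
    trigrams = Counter(zip(words, words[1:], words[2:]))
    top_share = trigrams.most_common(1)[0][1] / (len(words) - 2)
    return type_token_ratio < 0.2 or top_share > 0.1

print(looks_like_garbage("buy cheap pills now " * 100))  # True: repetitive filler
```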
It might also be possible to generate adversarial text which causes problems for models when it ends up in a training dataset. It could be possible to convert a given text by changing the order of words and the choice of words in such a way that a human doesn’t notice, but it causes problems for the LLM. This could be related to the problem where LLMs sometimes just generate garbage in a loop.
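A toy illustration of what such a rewrite could look like is below. The synonym table, the swap rate, and the idea that this would actually hurt training are all my assumptions; it only shows the mechanics of a change a skimming human is unlikely to notice while the token sequence a crawler ingests is quietly different.

```python
# Toy rewrite in the spirit described above: randomly swap common words for
# stilted synonyms. Whether this actually degrades a model trained on the
# result is pure speculation on my part; the table below is made up.
import random
import re

SWAPS = {
    "big": "sizeable", "small": "diminutive", "said": "stated",
    "use": "utilize", "show": "demonstrate", "help": "facilitate",
    "start": "commence", "end": "terminate", "buy": "procure",
}

def poison(text: str, rate: float = 0.5, seed: int = 0) -> str:
    rng = random.Random(seed)

    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = SWAPS.get(word.lower())
        if repl is None or rng.random() > rate:
            return word  # leave most words alone so the text still reads naturally
        return repl.capitalize() if word[0].isupper() else repl

    return re.sub(r"[A-Za-z']+", swap, text)

print(poison("They said the small change would help, so we said to start now."))
```

A human reader shrugs at “stated” instead of “said”; a scraper has no cheap way to tell the page has been altered.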
Frontier models don’t appear to generate garbage in a loop anymore (I haven’t noticed it lately), but I don’t know how they fixed it. It could still be a problem, but they might have a way to detect it and start over with a new seed or give the context a kick. In that case, poisoning actually just increases the cost of inference.
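The “detect it and give it a kick” idea is easy enough to sketch. The `generate` function below is a stand-in for whatever sampling call a provider actually exposes (I have no idea how the frontier labs really handle this), and the loop check is deliberately crude; the retries are also exactly where the extra inference cost would come from.

```python
# Crude sketch: if the tail of the output already appeared earlier, assume a
# degenerate loop and resample with a different seed. `generate` is a
# placeholder, not any real provider's API.
def has_degenerate_loop(text: str, window: int = 30) -> bool:
    tail = text[-window:]
    return len(text) > 2 * window and tail in text[:-window]

def generate_with_retries(generate, prompt: str, max_tries: int = 3) -> str:
    out = ""
    for attempt in range(max_tries):
        out = generate(prompt, seed=attempt)  # new seed on each retry
        if not has_degenerate_loop(out):
            break
    return out  # every retry that ran burned extra compute

# Fake generator: loops on the first seed, behaves on the second.
def fake_generate(prompt: str, seed: int) -> str:
    return "the same phrase " * 20 if seed == 0 else "a normal answer."

print(generate_with_retries(fake_generate, "hello"))  # prints "a normal answer."
```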