3 Comments

Even "off the shelf" generative AI models that fully respect an artist's right to opt out can still be fine-tuned using a specific collection of images to deliberately imitate someone's style. I certainly support regulating these companies to make sure they respect artists' rights, but it is much more difficult to think of a solution to the problem of someone tuning a model locally on commodity hardware. Right now it takes a bit of technical knowledge, but I'm sure a consumer-grade app that can be run locally will have fine-tuning capabilities soon.

https://waxy.org/2022/11/invasive-diffusion-how-one-unwilling-illustrator-found-herself-turned-into-an-ai-model/
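
The linked piece is a good illustration of how low the bar already is. Below is a minimal, hypothetical sketch of the kind of local LoRA fine-tuning involved, using the open-source diffusers and peft libraries. Everything specific in it (the checkpoint name, the "style_images" folder, the "sks" placeholder token, the hyperparameters) is an illustrative assumption rather than any particular tool's recipe; real scripts such as the DreamBooth/LoRA examples that ship with diffusers add batching, gradient accumulation, and checkpointing.

```python
# Minimal sketch: fine-tune small LoRA adapters on a folder of images so a
# Stable Diffusion checkpoint associates a placeholder token with their style.
# All names, paths, and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F
from pathlib import Path
from PIL import Image
from torchvision import transforms
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer
from peft import LoraConfig

model_id = "runwayml/stable-diffusion-v1-5"  # any SD 1.x checkpoint
device = "cuda"

tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").to(device)
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").to(device)
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").to(device)
scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Freeze the base model; train only small LoRA adapters in the UNet attention.
for m in (vae, text_encoder, unet):
    m.requires_grad_(False)
unet.add_adapter(LoraConfig(r=8, lora_alpha=8, init_lora_weights="gaussian",
                            target_modules=["to_q", "to_k", "to_v", "to_out.0"]))
optimizer = torch.optim.AdamW([p for p in unet.parameters() if p.requires_grad], lr=1e-4)

# Caption every training image with a rare placeholder token for the style.
ids = tokenizer("a painting in the style of sks artist", padding="max_length",
                max_length=tokenizer.model_max_length, return_tensors="pt").input_ids
with torch.no_grad():
    text_emb = text_encoder(ids.to(device))[0]

to_tensor = transforms.Compose([transforms.Resize((512, 512)), transforms.ToTensor(),
                                transforms.Normalize([0.5], [0.5])])  # map to [-1, 1]
images = [to_tensor(Image.open(p).convert("RGB")) for p in Path("style_images").glob("*.png")]

for step in range(500):  # a few hundred steps is often enough for a style
    pixels = images[step % len(images)].unsqueeze(0).to(device)
    latents = vae.encode(pixels).latent_dist.sample() * vae.config.scaling_factor
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps, (1,), device=device)
    noisy = scheduler.add_noise(latents, noise, t)
    pred = unet(noisy, t, encoder_hidden_states=text_emb).sample
    loss = F.mse_loss(pred, noise)  # standard epsilon-prediction objective
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Something like this runs in minutes to tens of minutes on a single consumer GPU, which is why wrapping it in a one-click app is a packaging problem, not a research problem.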

The only approach that seems robust against this would be for artists to add adversarial masking layers to their work, as described in this paper (Glaze): https://arxiv.org/pdf/2302.04222.pdf

I can see a few drawbacks already, though (a rough sketch of the idea follows the list):

1. I think this will quickly become a game of cat-and-mouse, as generative AI developers build new models that no longer have the weakness this exploit relies on.

2. It requires technical knowledge for artists to apply it to their work, though that is also perhaps a solvable problem.

3. It requires artists to (subtly) degrade the visual quality and integrity of their own work just to thwart AI systems. As the arms race in (1) continues, I can easily imagine the visual degradation having to become more obvious for the masking to keep working.
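
To make drawback (3) concrete: the core of such a mask is an adversarial perturbation, optimized so the image's representation in a model's feature space drifts toward a decoy style while the pixel-level change stays within a small budget. Here is a rough projected-gradient-style sketch of that idea; the feature extractor and the eps/alpha/steps values are assumptions for illustration, not the paper's exact method.

```python
# Sketch of feature-space "cloaking": push an image's features toward a decoy
# style under an L-infinity pixel budget. Parameters are illustrative only.
import torch
import torch.nn.functional as F

def cloak(image, decoy_feat, feature_extractor, eps=4/255, alpha=1/255, steps=100):
    """Return a cloaked copy of `image` (a (1, C, H, W) tensor in [0, 1]) whose
    features under `feature_extractor` move toward `decoy_feat`, with the
    pixel perturbation clamped to an L-infinity budget of `eps`."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        feat = feature_extractor(torch.clamp(image + delta, 0.0, 1.0))
        loss = F.mse_loss(feat, decoy_feat)     # distance to the decoy style
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # signed-gradient descent step
            delta.clamp_(-eps, eps)             # keep the change imperceptible
        delta.grad.zero_()
    return torch.clamp(image + delta, 0.0, 1.0).detach()
```

With Stable Diffusion's VAE, `feature_extractor` could be something like `lambda x: vae.encode(x * 2 - 1).latent_dist.mean` (again an assumption). Drawback (3) is visible right in the signature: raising `eps` makes the cloak more durable but more noticeable. And drawback (1) follows because the perturbation is only optimized against the feature extractors it was computed with.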

This is yet another example of the big grift promulgated by the originators of Web/Internet 1.0. To solve this problem we need to go back to the root cause: the lack of settlement systems that create incentives and disincentives and distribute wealth equitably, which is what makes networked ecosystems sustainable and generative. The principles behind efficient settlement systems apply to all of humanity's networks, but they are especially easy to implement in digital networked ecosystems. More here: http://bit.ly/2iLAHlG
