Who. The fuck. Cares
This will be the headline a month later:
Cara's monthly active users down to a few thousand. Here's why.
This is why Twitter will never die.
Cara has no passwords: you log in via Google or Apple
uhuh, no thanks
So much bad faith; I logged in just fine with a regular e-mail.
It's just a quote from the article, but good to know.
you can use your email
You have a problem with OAuth?
A lot of people are trying to de-google.
I'm no federation nazi, and I welcome projects like Cara, but at the beginning there are always lots of sign-ups.
I don't understand how this Glaze thing is supposed to stop AI being trained on the art.
It's not. It's supposed to target certain open-source AIs (Stable Diffusion specifically).
Latent diffusion models work on compressed images, which takes fewer resources. The compression is handled by a type of neural network called a VAE (variational autoencoder). For this attack to work, you must have access to the specific VAE you are targeting.
The image is subtly altered so that the compressed version looks completely different from the original. You can only do that if you know exactly what the compression network does. Stable Diffusion is a necessary part of the Glaze software, and the attack is ineffective against any closed-source image generator that has trained its own VAE (or equivalent).
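To make that concrete, here's a minimal sketch of the general idea, not Glaze's actual code: nudge the pixels so the VAE's latents drift toward a decoy image while every pixel change stays tiny. It assumes PyTorch and the diffusers AutoencoderKL; the checkpoint name, epsilon, and step count are illustrative guesses, not Glaze's real values.

```python
# Minimal sketch of a VAE-targeted perturbation, the general idea behind
# Glaze-style attacks. NOT Glaze's actual code; the checkpoint name, eps,
# and step count are illustrative assumptions.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()
for p in vae.parameters():
    p.requires_grad_(False)

def encode(x):
    # Map pixels (1x3xHxW, values in [-1, 1]) into SD's latent space.
    return vae.encode(x).latent_dist.mean

def glaze_like(image, target, eps=8 / 255, steps=100, lr=1e-2):
    """Perturb `image` so its latents move toward `target`'s latents
    while every pixel changes by at most +/- eps (imperceptibly)."""
    target_latents = encode(target).detach()
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        latents = encode((image + delta).clamp(-1, 1))
        F.mse_loss(latents, target_latents).backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change invisible to humans
    return (image + delta).clamp(-1, 1).detach()
```

The key point: the gradients come from this specific VAE, so the perturbation means nothing to a generator that compresses images with different weights. That's exactly why the attack doesn't transfer to closed models.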
This kind of attack is notoriously fickle and thwarted by even small changes. It's probably not even very effective against the intended target.
If you're all about intellectual property, it kinda makes sense that freely shared AI is your main enemy.
Not only is this kind of attack notoriously unstable, but figuring out which images have been glazed is a fantastic indicator for finding high-quality art, which is exactly the stuff you want to train on.
I doubt that. Having a very proprietary attitude towards one's images and making good images are not related at all.
Besides, good training data is to a large extent about the labels.
It pollutes the data pool. The idea is GIGO (garbage in, garbage out): feed the AI garbage and its results turn to garbage.
Basically, it puts some imperceptible stuff in the image file's data (somebody else should explain how because I don't know) so that what the AI sees and what the human looking at the picture sees are rather different. So you try to train it to draw a photorealistic car and instead it creates a lumpy weird face or something. Then the AI uses that defective nonsense to learn what "photorealistic car" means and reproduces it - badly.
If you feed a bunch of this trash into an AI and tell it that this is how to paint like, say, Rembrandt, and then somebody uses it to try to paint a picture like Rembrandt, they'll end up getting something that looks like it was scrawled by a 10-year-old, or the dogs playing poker went through a teleporter malfunction, or whatever nonsense data was fed into the AI instead.
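That "looks the same to you, looks like trash to the model" gap can be put in numbers. Picking up the VAE sketch from earlier in the thread (this reuses `encode` and `glaze_like` from it; the random tensors are just stand-ins for real images), the pixel change is capped at a few greyscale levels while the latent representation the model actually trains on shifts substantially:

```python
# How different is "what the AI sees" vs "what the human sees"?
# Reuses `encode` and `glaze_like` from the VAE sketch above.
import torch
import torch.nn.functional as F

original = torch.rand(1, 3, 512, 512) * 2 - 1   # stand-in for the artwork
decoy = torch.rand(1, 3, 512, 512) * 2 - 1      # stand-in for a decoy style
poisoned = glaze_like(original, decoy)

max_pixel_change = (poisoned - original).abs().max().item()   # capped at ~8/255
latent_shift = F.mse_loss(encode(poisoned), encode(original)).item()
print(f"max pixel change: {max_pixel_change:.4f}, latent shift (MSE): {latent_shift:.4f}")
```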
If you tell an AI that 2+2=🥔, that pi=9, or that the speed of light is Kevin, then nobody can use that AI to do math.
If you trained ChatGPT to explain history by feeding it descriptions of games of Civ6, then nobody could use it to cheat on their history term paper. The AI would go on about how Gandhi attacked Mansa Musa in 1686 with all-out nuclear war. It's the same thing here, but with pictures.
Right, but AFAIK Glaze targets the CLIP model inside diffusion models, which means any new version of CLIP would remove the effect of the protection.
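If that's right, here's a sketch of why a retrained encoder would shrug the protection off: a perturbation tuned against one specific encoder barely moves the features of a differently trained one. This uses Hugging Face's CLIPModel; the two checkpoints are just illustrative stand-ins for "the targeted version" and "a newer retrain", not what Glaze actually targets.

```python
# Sketch: an adversarial perturbation optimized against one encoder
# rarely transfers to an encoder with different weights.
# Checkpoint names are illustrative assumptions.
import torch
import torch.nn.functional as F
from transformers import CLIPModel

old_clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
new_clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").eval()
for model in (old_clip, new_clip):
    for p in model.parameters():
        p.requires_grad_(False)

def attack_features(model, pixels, eps=8 / 255, steps=50, lr=1e-2):
    """PGD-style: push the image's features away from their starting point."""
    start = model.get_image_features(pixel_values=pixels).detach()
    delta = torch.zeros_like(pixels, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feats = model.get_image_features(pixel_values=pixels + delta)
        (-F.mse_loss(feats, start)).backward()  # maximize the feature shift
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (pixels + delta).detach()

pixels = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed image
poisoned = attack_features(old_clip, pixels)

with torch.no_grad():
    for name, model in [("targeted encoder", old_clip), ("newer encoder", new_clip)]:
        shift = F.mse_loss(
            model.get_image_features(pixel_values=poisoned),
            model.get_image_features(pixel_values=pixels),
        ).item()
        print(f"{name}: feature shift {shift:.4f}")
```

The targeted encoder shows a large feature shift; the newer one, trained with different weights, shows almost none, which is the commenter's point about new CLIP versions undoing the protection.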
Nice try feds