this post was submitted on 11 Aug 2024
263 points (95.5% liked)

Technology

Am I missing something? The article seems to suggest it works via hidden text characters. Has OpenAI never heard of pasting text into a UTF-8 notepad before?

top 50 comments
[–] [email protected] 8 points 1 month ago

It's a good thing that ChatGPT is only one of many LLMs to choose from.

[–] [email protected] 13 points 1 month ago

I'm inclined to believe that they're throwing all prompts and outputs into a db and searching that.

[–] [email protected] 24 points 1 month ago (1 children)

As someone who has fiddled with Stable Diffusion, which also has an optional invisible watermark, this is a good feature: it means AI training can avoid content that marks itself as AI-generated. If people want to hide that their content is AI-generated, then, sadly, it's harder to detect.

[–] [email protected] 34 points 1 month ago (2 children)

Watermarking everything I digitally publish to keep my original content out of a training set.

Publishing a website full of de-watermarked AI slop to ruin future LLMs.

[–] [email protected] 2 points 1 month ago (1 children)

Aren't there better methods to poison AI?

I've heard Glaze and Nightshade are good, but I've never used them.

[–] [email protected] 3 points 1 month ago

They're already getting out of date, because newer models are catching up to them. It's a cat-and-mouse game that will likely never end.

[–] [email protected] 6 points 1 month ago

More info if you're seriously considering it. https://codoraven.com/blog/ai/stable-diffusion-the-invisible-watermark-in-generated-images/

I don't actually know if any model creators check for the watermark or not.

[–] [email protected] 42 points 1 month ago* (last edited 1 month ago) (1 children)

They could inject random zero-width non-joiners to help detection too. Easy to defeat, but something a layperson would have to go through extra effort to filter out. Kinda like how some plagiarism cases have been won by pointing out identical misspelled words.
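A minimal sketch of what that injection, and the trivial filter that defeats it, might look like in Python. The function names and injection rate are made up for illustration; nothing here reflects OpenAI's actual scheme:

```python
import random

ZWNJ = "\u200c"  # zero-width non-joiner: renders as nothing in most fonts

def inject_zwnj(text: str, rate: float = 0.1, seed: int = 42) -> str:
    """Append a zero-width non-joiner after a pseudo-random subset of words."""
    rng = random.Random(seed)
    words = text.split(" ")
    return " ".join(w + ZWNJ if rng.random() < rate else w for w in words)

def strip_zwnj(text: str) -> str:
    """The trivial defeat: delete every zero-width character you know about."""
    return text.replace(ZWNJ, "")

original = "the quick brown fox jumps over the lazy dog"
marked = inject_zwnj(original)
print(ZWNJ in marked)                  # True: the mark is present but invisible
print(strip_zwnj(marked) == original)  # True: one replace() removes it entirely
```

Any pipeline that normalizes Unicode wipes the mark out, which is why it would only ever catch the laziest copy-pasters.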

[–] [email protected] 6 points 1 month ago (1 children)

Yeah, no chance they'd rely on something that would be so easy to defeat. Watermarking by using word patterns is far more likely.

Still easy to defeat by just using another LLM to rephrase it though.

[–] [email protected] 4 points 1 month ago (1 children)

It's one of many things they could do just like how security is a layers thing.

[–] [email protected] 2 points 1 month ago

They could, but adding random zero-width characters inside words would also break every spell checker, giving the trick away immediately and ensuring that even unaware people would filter it out. Adding them only outside words would leave too few spots for a proper watermark.

I think it's far more likely they'll use some kind of pattern in the tokens - that way the watermark survives even when you don't copy-paste it.

But yeah, as said, they'll never tell how it's implemented, and it can still be subverted fairly easily.
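One hedged guess at what a token-pattern watermark could look like is the "green list" idea from the academic literature: a secret function of the previous token pseudo-randomly splits the vocabulary in half, and the sampler nudges generation toward the "green" half. A toy sketch, with all names illustrative and no claim that this is OpenAI's actual method:

```python
import hashlib
import random

def green_set(prev_token: str, vocab: list[str], frac: float = 0.5) -> set[str]:
    """Hash the previous token into a seed and mark a fixed fraction of the
    vocabulary 'green' - a different, reproducible half for every context."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    shuffled = vocab[:]
    random.Random(seed).shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * frac)])

def pick_token(prev_token: str, ranked_candidates: list[str], vocab: list[str]) -> str:
    """Take the most probable candidate that lands in the green set,
    falling back to the model's top choice when none do."""
    green = green_set(prev_token, vocab)
    for tok in ranked_candidates:
        if tok in green:
            return tok
    return ranked_candidates[0]
```

Readers see normal-looking text, while whoever holds the key can count how often tokens land "green" and flag anything far above 50%. Rephrasing with another LLM rerolls the tokens, which is exactly the defeat mentioned above.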

[–] [email protected] 14 points 1 month ago

What's the false positive rate tho
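If the detector is a statistical test on some keyed pattern, the false positive rate is just a tail probability of the detection score when no watermark is present. A rough sketch, assuming the score on unwatermarked text behaves like a standard normal z-score (an assumption, since no vendor has published their detector):

```python
import math

def false_positive_rate(z_threshold: float) -> float:
    """P(flagging human-written text), assuming the detector's score on
    unwatermarked text is approximately a standard normal z-score."""
    return 0.5 * math.erfc(z_threshold / math.sqrt(2))

print(false_positive_rate(2.0))  # ~0.023: far too many false accusations
print(false_positive_rate(4.0))  # ~3.2e-5: rare, but nonzero at web scale
```

The threshold is the whole trade-off: strict enough to almost never flag humans, loose enough to still catch lightly edited output.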

[–] [email protected] 87 points 1 month ago* (last edited 1 month ago) (5 children)

Am I the only one who rewrites most of ChatGPT's output into my own words because its "voice" is garbage anyway? I ask it to write me a cover letter, and that gives me a rough outline and some points to make, but I have to do massive editing to avoid redundancy, awkward phrasing, outright lies, etc.

I can't imagine turning in raw ChatGPT output. I had one of my developers use Bing AI to write code; he submitted that shit raw, and it was immediately obvious, because some relatively simple code had really weird artifacts, like overwriting a value that had no reason to even be touched.

[–] [email protected] 2 points 1 month ago

Yes, but I use ChatGPT to do the rewrites too

[–] [email protected] 6 points 1 month ago* (last edited 1 month ago) (1 children)

I use it to make outlines, which are usually very good, and then I use the class materials to flesh out the outlines in my own words. All my words, but ChatGPT told me what to include and in what order.

[–] [email protected] 5 points 1 month ago

That's valid. And I'd be surprised if that could be watermarked.

[–] [email protected] 5 points 1 month ago

I find it sounds too much like a marketing person, or something I'd see in an ad or on a website, so I "dumb it down" a bit to make it not sound too corporate. Sometimes telling ChatGPT to do so fixes this, though.

[–] [email protected] 4 points 1 month ago (2 children)

Lol. AI gonna take over the developers job. Like that's even close to happening.

[–] [email protected] 4 points 1 month ago (1 children)

LLMs aren't going to take coding jobs; there are purpose-built AIs being trained for that. They write code that works but doesn't make sense to human eyes. It's fucking terrifying, but EVERYONE just keeps focusing on the LLMs.

There are at least 2 more dangerous model types being used right now to influence elections and manipulate online spaces, and all anyone cares about is their fucking parrot bots....

[–] [email protected] 1 points 1 month ago (1 children)

Please elaborate for the uneducated

[–] [email protected] 1 points 1 month ago (1 children)
[–] [email protected] 1 points 1 month ago (1 children)

Thanks, great read. Appreciate it. That was one example but you mentioned two - are you thinking of some of the broader disinformation applications in addition to the data gathering mentioned?

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (1 children)

Look, I don't want to waste your time, so let me tell you that mass manipulation via AI is a subject I have been concerned about, researched, coded for, and posted about since the '90s.

You can be as pedantic and nit-picky as you want; it really doesn't matter to me. AI is the second-greatest existential threat we face as a species. If you haven't already been convinced at least to some degree of its danger, nothing I say will change your mind anyway.

The most dangerous AI manifestation right now is sentiment identification and control; the second is autonomous armed robots.

[–] [email protected] 1 points 1 month ago (1 children)

Thanks my dude. I was just asking you an honest question. Appreciate the information

[–] [email protected] 1 points 1 month ago (1 children)

Get bent in extradimensional vectors.

[–] [email protected] 1 points 1 month ago (1 children)
[–] [email protected] 1 points 1 month ago

You seem blocked

[–] [email protected] 7 points 1 month ago (1 children)

A few years ago, the output of GPT was complete gibberish, and a few years before that, even producing such gibberish would've been impressive.

It doesn't take anyone's job until it does.

[–] [email protected] 7 points 1 month ago (2 children)

Few years ago the output of GPT was complete gibberish

That's not really true. Older GPTs were already really good. Did you ever see SubredditSimulator? I'm pretty sure that first came around like 10 years ago.

[–] [email protected] 3 points 1 month ago

They were good for about a paragraph, maybe less.

As soon as they reached the attention limit, they started talking gibberish.

[–] [email protected] 2 points 1 month ago

The first time I saw text written by GPT, it all seemed alright at first glance, but once you actually started to read it, it was immediately obvious it had no idea what it was talking about. It was grammatically correct nonsense.

[–] [email protected] 28 points 1 month ago

Idk it looks good to me. Straight to the main branch you go.

[–] [email protected] 5 points 1 month ago

so the AI is going to say 'like' every third word?

[–] [email protected] 4 points 1 month ago

About a dozen methods they could use: https://arxiv.org/pdf/2312.07913v2

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago)

Humans instinctively do something analogous with natural language, using poetic forms like rhyme, meter, and alliteration. (For example, the speeches from Shakespeare’s plays are immediately detectable because they’re in iambic pentameter.)

Imagine you lacked the natural human ability to detect verse, making poetry indistinguishable from prose. As far as you could tell, it would be like an invisible watermark that only specialists could detect. LLMs can use a similar approach, making up their own patterns that are opaque to humans but detectable to themselves.
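Continuing the analogy, here is a sketch of how a pattern invisible to readers can still be statistically detected by whoever knows what to test for. The hash-parity "pattern" is a stand-in for illustration, not any real vendor's scheme: ordinary text lands "green" about half the time by chance, while text steered toward green word pairs scores far above that:

```python
import hashlib
import math

def is_green(prev_word: str, word: str) -> bool:
    """Keyed coin flip: does this word pair hash into the 'green' half?"""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).hexdigest()
    return int(digest, 16) % 2 == 0

def watermark_zscore(words: list[str]) -> float:
    """How far the observed green fraction sits above the 50% chance level,
    in standard deviations."""
    pairs = list(zip(words, words[1:]))
    hits = sum(is_green(a, b) for a, b in pairs)
    n = len(pairs)
    return (hits - n / 2) / math.sqrt(n / 4)
```

Ordinary prose hovers near z = 0; a generator that deliberately favors green pairs pushes z into territory that essentially never happens by accident - much like meter marks out verse, but only if you know to listen for it.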
