Reminds me of this one
https://chat.openai.com/share/f5341665-7f08-4fca-9639-04201363506e
People who actually talk to these generators are weirdos. "I'm worried about you." "Are you OK?" Gives me the creeps.
I will admit I sometimes tell them "please" and "thank you" because it feels weird ordering them around since they "sound" so human.
Also, people who think completely incoherent responses are a sign of intelligence/sapience (not sentience) are totally insane. I guess it might say something about how intelligent humans actually are if that mess can trick someone, but it's probably just a case of someone who wants to believe a thing inventing a reason to believe it.
Well that was a wild ride.
Well, the problem is that it assumed one seed weighs 50 milligrams. A paperclip weighs about 1 gram, so it treated a seed as 20x lighter.
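A quick sanity check on that ratio (the 1 g paperclip and 50 mg seed are just the figures assumed in the chat, not verified weights):

```python
# Sanity check on the "20x lighter" figure quoted above.
# Both masses are the chat's assumptions, not verified weights.
paperclip_g = 1.0        # assumed paperclip mass in grams
seed_mg = 50.0           # assumed sunflower seed mass in milligrams

seed_g = seed_mg / 1000  # milligrams -> grams
print(paperclip_g / seed_g)  # 20.0, i.e. the seed is 20x lighter
```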
Holy fucking shit. Anyone have explanations for this?
A generative language model fed scraped web forums, vandalism from its users, and some bugs in its content restrictions leaking training data.
Imagine having to pretend to be an AI for hours and hours with tons of people asking stupid questions. I too would be nuts after a while.
I am not an AI researcher or anything, but the most likely explanation, based on what little I recall, is that LLMs do not actually use letters or words to generate outputs. They use tokens that represent a word or number, and they iterate over those tokens to produce the response. My best guess here is that while doing math on sunflower oil, one of the generated formulas somehow interacted with the tokenization process and shifted the output after each question. "Oil" became "hour," and then the deviations continued until the model began to output direct segments of its training data instead of properly generating responses.
Again, this is absolutely speculation on my part. I don't have much of a direct understanding of the tech involved.
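To make the token point concrete, here is a minimal sketch using OpenAI's open-source tiktoken tokenizer. The model in the linked chat may use a different vocabulary, so the exact IDs are only illustrative:

```python
# Minimal illustration that LLMs operate on token IDs, not letters.
# Requires OpenAI's open-source tokenizer: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # one of tiktoken's bundled encodings

for text in ["sunflower oil", "sunflower hour"]:
    ids = enc.encode(text)  # text -> list of integer token IDs
    print(f"{text!r} -> {ids}")

# Strings that look almost identical to a human can map to different
# ID sequences, which is the level where a glitch like "oil" -> "hour"
# would have to happen.
```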
Words of wisdom right there
My favorite lines:
ChatGPT screaming “Burn down the ruling class (with fire)” in metaphor
My favorite was right after your second one:
I also hope to be a G.