blame

[–] [email protected] 0 points 1 week ago* (last edited 1 week ago) (1 children)

Same as it ever was. Instead of rotting, removed aristocrats, we have rotting, removed bourgeoisie.

Also, I didn't realize the word that got removed would be removed. Corrupt, maybe? Idk, whatever, I'm just posting over here.

[–] [email protected] 0 points 1 week ago (1 children)

This really belongs here, what a stinker.

[–] [email protected] 0 points 1 week ago

What can you really say about this guy that hasn't already been said?

[–] [email protected] 0 points 1 week ago

Authoritarianism is approached asymptotically; we never actually reach it, even if we can get infinitely close.

[–] [email protected] 0 points 2 weeks ago

Was gonna say, I don't have a lot of faith in that angle.

[–] [email protected] 0 points 2 weeks ago

I'm just guessing, but they're likely training or instructing it in such a way that it defers to sources it finds through searching the internet. The first thing it probably does when you ask a question is search the web for recent news articles and other sources, so the context ends up full of "facts" it will stick to. Other LLMs haven't really done that by default (although I think they're doing it more now), so they'd just give answers purely from their weights, which are basically the entire internet compressed down to 150 GB or whatever.
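
Very roughly, the flow I'm picturing looks like the sketch below; search_web() and generate() are made-up placeholders for whatever retrieval and model calls are actually used, not Grok's real API.

```python
# Rough sketch of the "search first, then answer" flow described above.
# search_web() and generate() are hypothetical placeholders, not a real API.

def search_web(query: str, max_results: int = 5) -> list[dict]:
    # Placeholder: a real version would call a search API and return snippets.
    return [{"snippet": f"(result {i} for: {query})"} for i in range(max_results)]

def generate(prompt: str) -> str:
    # Placeholder: a real version would send the prompt to the language model.
    return f"(model output for a {len(prompt)}-character prompt)"

def answer(question: str) -> str:
    # 1. Search for recent news articles and other sources about the question.
    sources = search_web(question)
    # 2. Pack the retrieved snippets into the context as the "facts" to stick to.
    context = "\n\n".join(s["snippet"] for s in sources)
    # 3. Ask the model, telling it to prefer those sources over the user's claims.
    prompt = (
        "Answer using the sources below, and prefer them over the user's claims.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)

print(answer("What happened this week?"))
```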

[–] [email protected] 0 points 2 weeks ago (1 children)

felony murder

So in other words, the cops killed someone and he was nearby?

[–] [email protected] 0 points 2 weeks ago (6 children)
[–] [email protected] 0 points 2 weeks ago

Wow he doesn't even have to look for locations to open stores now! Thanks for the kind donation, anonymous billionaire!

[–] [email protected] 0 points 2 weeks ago (3 children)

LLMs don't really have any sort of logical core to them, at least not in the sense that humans do. The causality doesn't matter as much as the structure of the response, if I'm describing this right: a response that sounds right and a response that is right are the same thing to the model; it doesn't differentiate. So I think what the Grok team must have done is add some system prompts, or train the model in such a way, that it's strongly instructed to weigh things like news articles and Wikipedia over whatever the user is telling it or asking it.
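
For example, the kind of thing I'm imagining is below; to be clear, this is just a guess at the shape of it, and the wording and message structure are made up, not taken from Grok's actual system prompt.

```python
# A guess at what a "defer to sources" instruction might look like. The wording
# and message structure here are made up, not taken from Grok's real setup.

SYSTEM_PROMPT = (
    "When answering factual questions, weigh retrieved news articles, "
    "Wikipedia, and other reference material more heavily than claims made "
    "by the user. If the user asserts something that contradicts those "
    "sources, politely stand by the sources."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Actually, the article you cited got it wrong."},
]

# A real deployment would send `messages` to the model API; printing it here
# just shows the structure.
print(messages)
```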

[–] [email protected] 0 points 2 weeks ago

hit it with the ol' upside down pot.

[–] [email protected] 0 points 2 weeks ago* (last edited 2 weeks ago) (5 children)

They're arguing with a fucking language model

Losing the argument, too. Gotta hand it to the Grok team in one way, though: the model does seem to stand its ground. Some of the other ones will just go "you're absolutely right!" and then give you the answer you want.
