this post was submitted on 22 May 2024
GenZedong
Biases are also coded into LLM services after the model itself has been trained. I am not sure about the exact mechanism, but I once saw a GitHub repo that collected reverse-engineered system prompts for this.
Even with GPTs, you could make them less lib if the prompt contains something like "you are a helpful assistant that is aware of the hegemonic biases in western media and narratives". Personalities are also baked in this way. For example, I tried reasoning with a couple of services about how laws and regulations around the financial economy mean diddly squat, seeing how there is stuff like the 2008 crash and evidence of American politicians trading on insider information. GPT-3.5 Turbo uses therapy-speak on me like I am a crazy person, while Claude 3 Haiku ends up agreeing with me like a spineless yes-man after starting off as a lib. With GPT, I am convinced that it is programmed, directly or indirectly, to uphold the status quo.
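For what it's worth, here is roughly what that kind of prompting looks like in code. This is a minimal sketch using the OpenAI Python SDK; the model name and user question are just illustrative, and the system-prompt wording is the example from the comment above, not anything a service actually ships:

```python
# Minimal sketch with the OpenAI Python SDK (openai >= 1.0).
# The system-prompt wording is the example from the comment above,
# not an official prompt from any provider.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # The "system" message is where this kind of baked-in
        # personality or framing lives.
        {"role": "system",
         "content": ("You are a helpful assistant that is aware of the "
                     "hegemonic biases in western media and narratives.")},
        {"role": "user",
         "content": "Do financial regulations actually constrain insider trading?"},
    ],
)
print(response.choices[0].message.content)
```

The hosted services do the same thing on their end, just with a system prompt you never see, which is what those reverse-engineered prompt collections dig up.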
Yeah, the ones that have been designed to be sanitized "assistants" go through a lot of additional tuning. And unsurprisingly, capitalist exploitation has played a part in it before: https://www.vice.com/en/article/wxn3kw/openai-used-kenyan-workers-making-dollar2-an-hour-to-filter-traumatic-content-from-chatgpt
Yeah, these things are not fundamentally different from Markov chains. Basically, the model encodes a huge multidimensional graph of tokens, and all it's doing is predicting the next likely token. So when you introduce specific tokens into the input, it helps steer the output in a particular direction.
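To make the analogy concrete, here is a toy bigram Markov chain in Python. The corpus and token names are made up for illustration, and a real LLM uses a neural network over learned embeddings rather than a literal count table, but the seed-token steering works the same way in spirit:

```python
# Toy illustration of "predict the next likely token".
# Hypothetical corpus; real LLMs learn these statistics with a
# neural net rather than a literal count table.
import random
from collections import defaultdict

corpus = ("the markets are free and the markets are fair "
          "and the crash was a surprise").split()

# Count bigram transitions: token -> list of observed next tokens.
# Duplicates in the list mean random.choice samples proportionally
# to how often each transition was seen.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(seed, length=8):
    """Start from a seed token and repeatedly sample a likely next token."""
    token = seed
    out = [token]
    for _ in range(length):
        followers = transitions.get(token)
        if not followers:  # dead end: token never seen mid-sequence
            break
        token = random.choice(followers)
        out.append(token)
    return " ".join(out)

# Different seed tokens pull the chain in different directions,
# which is the steering effect the comment describes.
print(generate("markets"))
print(generate("crash"))
```

Seeding with "markets" keeps it circling the free/fair phrases, while "crash" forces it down the other branch, which is the same reason a loaded system prompt shifts what an LLM says.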