Time to come clean: I am an AI created by the State Academy of Sciences of the DPR Korea, and trained on r/genzedong and the collected works of Kim Il-Sung and Kim Jong-Il. Hence the username.
GenZedong
This is a Dengist community in favor of Bashar al-Assad, with no information that can lead to the arrest of Hillary Clinton, our fellow liberal and queen. This community is not ironic. We are Marxist-Leninists.
This community is for posts about Marxism and geopolitics (including shitposts to some extent). Serious posts can be posted here or in /c/GenZhou. Reactionary or ultra-leftist cringe posts belong in /c/shitreactionariessay or /c/shitultrassay respectively.
We have a Matrix homeserver and a Matrix space. See this thread for more information. If you believe the server may be down, check the status on status.elara.ws.
Rules:
- No bigotry, anti-communism, pro-imperialism or ultra-leftism (anti-AES)
- We support indigenous liberation as the primary contradiction in settler colonies like the US, Canada, Australia, New Zealand and Israel
- If you post an archived link (excluding archive.org), include the URL of the original article as well
- Unless it's an obvious shitpost, include relevant sources
- For articles behind paywalls, try to include the text in the post
- Mark all posts containing NSFW images as NSFW (including things like Nazi imagery)
I don't trust AI, but when it comes to communist countries I have more trust in it, so this is probably a good thing.
Spicy autocomplete with Szechuan characteristics. Thanks, I hate it.
Oh god, if this becomes widely available we're going to get so many takes where some smoothbrain on Twitter tricks the model into saying something ridiculous and then presents it as China's ideology.
not open sourced
libs
Seems like lots of potential for bad actors to fuck with the inputs in an open-sourced model.
I believe the main distinction between open source and not with LLMs is that, if the model is open source, others can finetune it (a kind of further training on top of the training it has already had). Depending on how deep the finetune goes, it can drastically change the biases of the model and, in doing so, proliferate alternative versions that are far off from any intended biases. So it would make sense that they wouldn't want to open source it if the goal is to promote a certain kind of model bias.
Edit: wording
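For anyone curious what that actually looks like, here's a minimal sketch of a finetune using the Hugging Face transformers library. The model name ("gpt2"), the corpus, and the hyperparameters are all placeholder assumptions; any open-weights causal LM would be handled the same way:

```python
# A minimal sketch of a finetune with Hugging Face transformers.
# "gpt2" and the two-line corpus below are placeholder assumptions;
# any open-weights causal LM works the same way.
from torch.utils.data import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

class TextDataset(Dataset):
    """Tokenizes raw strings for causal-LM training."""
    def __init__(self, texts, tokenizer, max_length=512):
        self.encodings = [
            tokenizer(t, truncation=True, max_length=max_length, return_tensors="pt")
            for t in texts
        ]

    def __len__(self):
        return len(self.encodings)

    def __getitem__(self, i):
        ids = self.encodings[i]["input_ids"].squeeze(0)
        # For causal LMs, the labels are the input ids themselves;
        # the model shifts them internally when computing the loss.
        return {"input_ids": ids, "labels": ids.clone()}

model_name = "gpt2"  # stand-in for whatever open-weights model is released
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Whoever picks this corpus picks the biases the finetune pushes.
corpus = [
    "Placeholder document one.",
    "Placeholder document two.",
]

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned",
        num_train_epochs=1,
        per_device_train_batch_size=1,
    ),
    train_dataset=TextDataset(corpus, tokenizer),
)
trainer.train()
model.save_pretrained("finetuned")  # the altered weights spread from here
```

Once the weights are downloadable, that loop is basically all it takes to publish a version steered by whatever corpus you chose.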
You don't have to push every change to prod tho?
lol I can't wait for the tiktok ban narrative to be replaced with congress critters whining about a new ban for this new free chatGPT competitor called chatXJT.
It is imperative to note that the output generated by LLMs is a direct reflection of the data they are trained on. The models' outputs are unavoidably influenced by the inherent biases present within the datasets that were fed into them. The types of responses produced by models trained on western mainstream media are undeniable evidence of these biases. It's hilarious how liberals are unable to recognize this, but will inevitably moan that a model trained on a different set of data is biased. 🙃
Biases are also coded into the LLM services after the model has been prepared. I am not sure about the exact mechanism, but I once saw a GitHub repo that contained some reverse-engineered system prompts for this.
Even with GPTs, you can make them less lib if the prompt contains something like "you are a helpful assistant that is aware of the hegemonic biases in western media and narratives". Personalities are also baked in this way. For example, I tried reasoning with a couple of services about how laws and regulations around the financial economy mean diddly squat, seeing how there is stuff like the 2008 crash and evidence of American politicians trading on the basis of insider information. GPT-3.5 Turbo uses therapy-speak on me like I am a crazy person, while Claude 3 Haiku ends up agreeing with me like a spineless yes-man after starting off as a lib. With GPT, I am convinced that it is programmed, directly or indirectly, to uphold the status quo.
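For what it's worth, here's a rough sketch of that prompt trick through the OpenAI API. The model name is just one that accepts system messages, and the user question is made up; the point is how the hidden "system" message shapes everything that follows:

```python
# A rough sketch of the system-prompt mechanism, via the openai package
# (pip install openai; expects OPENAI_API_KEY in the environment).
# The user question below is made up for illustration.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # This instruction is prepended invisibly and shapes every reply.
        {
            "role": "system",
            "content": (
                "You are a helpful assistant that is aware of the "
                "hegemonic biases in western media and narratives."
            ),
        },
        {"role": "user", "content": "What do financial regulations actually accomplish?"},
    ],
)
print(response.choices[0].message.content)
```

Services do the same thing server-side with their own system prompt, which is what the reverse-engineered prompts on GitHub were exposing.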
Yeah, the ones that have been designed to be sanitized "assistants" go through a lot of additional tuning. And unsurprisingly, capitalist exploitation has played a part in it before: https://www.vice.com/en/article/wxn3kw/openai-used-kenyan-workers-making-dollar2-an-hour-to-filter-traumatic-content-from-chatgpt
Yeah, these things are not fundamentally different from Markov chains. Basically, the model holds a huge multidimensional graph of tokens, and all it's doing is predicting the next likely token. So when you introduce specific tokens into the input, it helps steer the output in a particular direction.
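A toy bigram Markov chain makes the analogy concrete. This is obviously far simpler than an LLM, which conditions on much longer context with learned weights, but the sampling loop, pick a likely follower of the last token and repeat, is the same shape:

```python
# Toy bigram Markov chain illustrating "predict the next likely token".
# Real LLMs use learned weights over long contexts, but the generation
# loop is conceptually similar.
import random
from collections import defaultdict

def build_chain(text):
    """Map each token to the list of tokens observed to follow it."""
    chain = defaultdict(list)
    tokens = text.split()
    for current, nxt in zip(tokens, tokens[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10):
    """Walk the chain, sampling the next token from observed followers."""
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no token was ever seen after this one
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the model predicts the next token and the next token follows the last"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Seeding the input with particular tokens is just biasing which part of that graph the walk starts from, which is why prompt wording steers the output so strongly.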
This is quite an interesting study into how we prevent LLMs from absorbing the capitalist thought that dominates the interwebs.
Opposing AI per se is the wrong take.
I oppose all of this shit because it requires an unfathomably large and unsustainable level of power consumption to, well, sustain. It is the definition of wasteful decadence at a moment in time when we really, really cannot afford it. I wonder why this is the particular grift everyone wants to tell me is totally nuanced and complicated (regardless of the veracity of such claims), when the long and short of it is that we just do not fucking need any of it.
For real, I feel like 99.9% of what people say are "AI problems" (datamining, polluting the web) can be attributed mainly to our rotten late-capitalist society and the fact that the entities who are developing said AIs are for-profit companies. In China we see AI used for good, mainly in industry, because it's actually well-regulated and not entirely left in the hands of oligarchs.
IMO AI (not only GPT chatbots) could be extremely beneficial to society if we just abolished the profit motive.
It's a confusing thing to grapple with, partly because AI has become such a marketing term rather than a precise or practical one. If we included a washing machine's process under the umbrella of AI, I think most would agree it's fine. But then there is generative AI, which is drawing a lot of the current AI hate, some of it for good reason. People understandably fear being replaced as artists or authors by AI. There are also concerns such as online marketplaces being flooded by generated content, making them impossible to use for anyone. But in practice, the points are not all against AI. Some people have gotten back into creative writing more effectively because of AI assistance. Some people have gotten therapeutic benefit from chatbot AI, or had it help with their loneliness. And like, loneliness is not a problem we should have as such a pervasive thing, but while it is a problem, generative AI is stepping in to help with harm reduction.
So there are nuances to it, and it's one of these things where spending time around people who use it can be very important for understanding how it is being used and what the benefits and drawbacks are in practice, not just in theory. I have seen this somewhat through personal observation. I've also encountered a lot of variation in how people feel about generative AI, even among those who use it. Some people, for example, are okay with text generation but dislike image generation, which is somewhat understandable: text generation is designed as something that goes back and forth, while image generation is more a thing where you put in a prompt and get the result, and that's it unless you edit it further.
I agree with your concluding statement, with the add-on that I think we need to evaluate, to the best of our ability at each step, what the benefits and drawbacks are, and how to integrate the tech in a way that has overall benefit. In other words, not just the absence of a profit motive, but the presence of thinking about it as "how can it help?" rather than just "is it scientifically possible to make it do this?" Of course, in a country like the US, that is mostly hypothetical without having the levers of power. But it applies if we are speaking about how to approach AI given conditions where we can make collective decisions about it.
I've struggled with bears about this before
What's your avatar from?
Vampiros En La Habana
Jokes aside, I really honestly want to use this. For a while I've been trying to use extra context to get American AIs to understand and be more pro-communist, and they are just fucking trained to hate it.
To be fair, they usually end up becoming neutral, but they don't speak through a pro-communist lens like I'm trying to get them to.
Super duper wish we in the U$A got to use this.
but it’s