this post was submitted on 22 May 2024
GenZedong
This is a Dengist community in favor of Bashar al-Assad with no information that can lead to the arrest of Hillary Clinton, our fellow liberal and queen. This community is not ironic. We are Marxist-Leninists.
This community is for posts about Marxism and geopolitics (including shitposts to some extent). Serious posts can be posted here or in /c/GenZhou. Reactionary or ultra-leftist cringe posts belong in /c/shitreactionariessay or /c/shitultrassay respectively.
We have a Matrix homeserver and a Matrix space. See this thread for more information. If you believe the server may be down, check the status on status.elara.ws.
Rules:
- No bigotry, anti-communism, pro-imperialism or ultra-leftism (anti-AES)
- We support indigenous liberation as the primary contradiction in settler colonies like the US, Canada, Australia, New Zealand and Israel
- If you post an archived link (excluding archive.org), include the URL of the original article as well
- Unless it's an obvious shitpost, include relevant sources
- For articles behind paywalls, try to include the text in the post
- Mark all posts containing NSFW images as NSFW (including things like Nazi imagery)
Opposing AI per se is a wrong take
I oppose all of this shit because it requires an unfathomably large and unsustainable level of power consumption to, well, sustain. It is the definition of wasteful decadence at a moment when we really, really cannot afford it. I wonder why it is this particular grift everyone wants to tell me is totally nuanced and complicated (regardless of the veracity of such claims), when the long and short of it is that we just do not fucking need any of it.
For real, I feel like 99.9% of what people call "AI problems" (datamining, polluting the web) can be attributed mainly to our rotten late-capitalist society and the fact that the entities developing said AIs are for-profit companies. In China we see AI used for good, mainly in industry, because it's actually well regulated and not left entirely in the hands of oligarchs.
IMO AI (not only GPT chatbots) could be extremely beneficial to society if we just abolished the profit motive.
It's a confusing thing to grapple with, partly because "AI" has become a marketing term rather than a precise or practical one. If we included a washing machine's process under the umbrella of AI, I think most would agree it's fine. But then there is generative AI, which is drawing a lot of the current AI hate, and some of it for good reason. People understandably fear being replaced as artists or authors by AI. There are also concerns such as online marketplaces for these things being flooded by generated content, making them impossible for anyone to use. But in practice, the points are not all against AI. Some people have gotten back into creative writing more effectively because of AI assistance. Some people have gotten therapeutic benefit from chatbot AI or found that it helped with loneliness. And like, loneliness is not a problem we should have as such a pervasive thing, but while it is a problem, generative AI is stepping in to help with harm reduction.
So there are nuances to it, and it's one of those things where spending time around people who use it can be very important for understanding how it is being used and what the benefits and drawbacks are in practice, not just in theory. I have seen this somewhat through personal observation. I've also encountered a lot of variation in how people feel about generative AI, even among those who use it. Some people, for example, are okay with text generation but dislike image generation; which is somewhat understandable, as text generation is designed as something that goes back and forth, while image generation is more a thing where you put in a prompt and get the result, and that's it unless you edit it further.
I agree with your concluding statement, with the add-on that I think we need to evaluate, to the best of our ability at each step, what the benefits and drawbacks are, and how to integrate the tech in a way that has overall benefit. In other words, not just the absence of a profit motive, but the presence of thinking about it as "how can it help?" rather than just "is it scientifically possible to make it do this?" Of course, in a country like the US, that is mostly hypothetical without having the levers of power. But it's worth thinking through if we are speaking about how to approach AI, given conditions where we can make collective decisions about it.
I've struggled with bears about this before
https://hexbear.net/comment/3464576
https://hexbear.net/post/268073
What's your avatar from?
Vampiros En La Habana