this post was submitted on 27 Apr 2025
65 points (100.0% liked)

technology

yikes

[–] [email protected] 36 points 23 hours ago (7 children)

There is no component of "AI" that doesn't stack the deck against the proletariat. Look at the most common use cases:

  • Image / video generation: used to make artists, designers, and photographers disposable, circumventing the need to hire them and dumping successful ones back into the reserve army of labor.
  • Code generation: used to make software developers more disposable and to reduce the salaries of some of the only well paid proletarians.
  • Text generation: used to manipulate public opinion, present ideas as originating from individuals or grassroots efforts, and frustrate any conversations that threaten your investments. Eventually, this scope will expand to creating shitty replacements for any worker whose primary job is to interface with others via natural language, e.g. therapists, doctors, consultants, and experts of any kind.

"AI" is exclusively a threat to the proletariat. The potential use cases used to sell it will never come to fruition, because realizing them doesn't benefit those paying to deploy "AI" systems.

Unless the machine learning project comes from an AES state, you will never see "AI" to detect cancer, assist people with disabilities, or reduce the cost of anything sold to consumers. Those use cases will die in academic journals, never to hit the market.

LLM projects are currently being provided at a loss to end users to make them indispensable to their users' workflows. As with social media, once market penetration is near universal, LLM and other "AI" services will be used exclusively as spyware or adware, or will demand ever-increasing subscription fees.

Anyone excited by "AI" systems is either bourgeois or a rube who doesn't understand that the benefits will be systematically used to eliminate what little influence they can exert over bourgeois society. At best, these systems will become marginally better or more useful internet comment regurgitation machines. At worst, AGI will be built to fit the needs of a class that seeks to enslave us all, at which point subjugation will be total and unavoidable.

We must make sure an AES state (i.e. China) leads in artificial intelligence.

[–] [email protected] 13 points 19 hours ago (3 children)

At worst, AGI will be built to fit the needs of a class that seeks to enslave us all, at which point subjugation will be total and unavoidable.

We must make sure an AES state (i.e. China) leads in artificial intelligence.

kinda veering into effective altruist/longtermist territory here. the rise of machine learning and LLM usage isn't a concrete step towards AGI. LLMs don't understand literally anything; they just generate statistically right-sounding babble. for well-trod topics, that can produce correct responses, but not because there's any intelligence going on
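the "statistically right-sounding babble" point can be sketched with a toy bigram (Markov-chain) model: it picks each next word purely from co-occurrence counts, with zero understanding. this is a deliberately crude illustration, not how production LLMs are actually built, and the corpus here is made up:

```python
import random
from collections import defaultdict

# Tiny made-up corpus; the "model" only knows which word followed which.
corpus = ("the model predicts the next word and the next word "
          "sounds right because the statistics say it should").split()

# Count which words follow each word.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def babble(start, length=8, seed=0):
    """Chain statistically plausible next words; no meaning involved."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        choices = following.get(words[-1])
        if not choices:
            break  # dead end: this word never had a successor
        words.append(random.choice(choices))
    return " ".join(words)

print(babble("the"))
```

every output is locally plausible (each word really did follow the previous one somewhere in the corpus), which is exactly why it can sound right while understanding nothing.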

But of course I generally agree: in the capitalist world this technology will be used and developed only insofar as it is profitable or can be used for class warfare against us. The trajectory is similar in china, but they are actually funding the types of research more valuable to humanity, and will likely curb most excesses that would harm working people rather than lean into them.

honestly I'd go further and say that LLMs are making us all dumber, to say nothing of the kids who will grow up using LLMs to complete assignments written by LLMs

[–] [email protected] 4 points 17 hours ago (1 children)

The rise of machine learning and LLM usage isn't a concrete step towards AGI.

Agreed entirely with LLMs. However, creating and deploying LLMs and ML will broadly create the conditions required to be able to bring about AGI. Developing an educated tech workforce, ML domain expertise, and chip manufacturing is a necessary step and one that doing LLM and genAI bullshit helps accomplish.

ML is broad enough to include simulating on silicon the same architectures biological brains operate on, so while this is pedantic 🤓, it is likely a significant step towards general intelligence.

Yeah, LLMs suck ass, but they need not be intelligent to enable extreme influence capabilities. The average person won't notice that a person in a generated crowd has 8 fingers on each hand, nor will they care, so long as the image supports their preferred narrative.

I'm not a bazinga that believes we (or China) will LLM our way into a utopia in a decade or two. LLMs will not become intelligent.

However, I believe AGI is inevitable, will provide a massive first mover advantage, and will enable capabilities for those who wield it to subjugate those who don't. That means it is imperative that proletarian forces wield it before capitalists do.

[–] [email protected] 3 points 17 hours ago* (last edited 16 hours ago) (1 children)

I'd argue the trend I see in the west in tech right now is that overuse of LLMs is actually resulting in a dumber, less educated tech workforce. that may not be the case elsewhere but it is in my experience. more productive? for some people yes, but not more innovative, or more educated.

Edit: okay, I misread you a bit with the above. you weren't saying genAI was making people smarter, but that the practice of implementing it and scaling up chip production is a step on the path to AGI

However, I believe AGI is inevitable, will provide a massive first mover advantage, and will enable capabilities for those who wield it to subjugate those who don't.

I'm a bit more agnostic on AGI but yeah, that is what I mean by longtermism... I just don't think what we do in the present should be dominated by the possibility of developing AGI in 50 years

[–] [email protected] 2 points 16 hours ago

Completely agreed.

I just don't think what we do in the present should be dominated by the possibility of developing AGI in 50 years

I agree, but I think that the ecosystem within which AGI develops decades from now will grow out of the ecosystem currently developing LLMs and genAI. All the same components are required.

While superintelligence is far off in the future, the reasoning systems and architectures required for it will be developed in the coming decades, and the intermediate, less intelligent systems along the way will provide capabilities that accelerate the development of more intelligent ones.

It will be necessary for China or some other AES state to be ahead of bourgeois entities long before AGI can meaningfully iterate upon itself or its gains can compound.

Just as the USSR's development of nuclear weapons shortly after the US prevented the US from using them for heinous acts to enforce its hegemony, China must do the same with AI, but even earlier, given how willing the US will be to deploy these systems without incurring the immediate backlash that comes with vaporizing millions of people.
