this post was submitted on 22 Sep 2024
Socialism
Lots of efficiency improvements are possible for lots of tools but they often don't happen on a sufficiently large scale because of capitalism. It's why we have "just another lane, bro" stroads instead of viable mass transit across most of Burgerland, for example.
I highly doubt bazinga-Americans, from ruling class billionaires to their stans and glazers, are that interested in efficiency when they feverishly demand that ascended techno-gods emerge from sufficiently large treat-printer databases. One such glazer is even in this thread, right now.
In this case, I think we are going to see such improvements because companies operating LLMs have a direct incentive to cut costs. It's also worth noting that a lot of the improvements are happening in the open-source space, and I firmly believe that's how this tech should be developed in the first place.
I find that complaining about the fact that generative models exist isn't really productive. There's no putting the toothpaste back in the tube at this point. However, it is valuable to have discussions about how this tech should be developed and used going forward.
One more thing: you may want to look at the numbers on just how extensive and wasteful current "AI" usage is among tech companies, and how much further they intend to expand its use, whether people ask for it or not, pretty much everywhere.
If you haven't heard of the Jevons paradox, it also helps explain why increasingly efficient gasoline engines haven't actually reduced overall carbon waste: efficiency made driving cheaper, so ever more of those more efficient engines were put on the road and driven more, and total fuel consumption kept climbing all the while.
https://en.wikipedia.org/wiki/Jevons_paradox
I'm well aware of the Jevons paradox; what it says, however, is that we'll always find new uses for an energy surplus. If it wasn't LLMs, it would just be something else. There's nothing uniquely bad about AI; it's just a technology that can be used in a sensible way or not. The thing we need to focus on is how we structure our society to ensure that we're not using technology in ways that are harmful to us.
Again, I'm not down with inevitabilist arguments. May as well say the Joads' house was going to get torn down somehow too.
If one believes nothing can or even should be done about the destructive excesses of capitalism, where does the leftism part even begin?
There actually is something uniquely bad about it, considering the jobs and consequent material conditions affected by it that were otherwise unaffected before its use. Just saying it's all the same sounds like downright drilposting.
No shit. Same deal with CFCs, high-fructose corn syrup, partially hydrogenated soybean oil, and leaded gasoline. Saying "do nothing, it's inevitable, no different from anything before, and it can't be helped" while also saying "restructure society" is downright paradoxical to me here.
Well, you brought up the Jevons paradox here, which kind of is an inevitabilist argument. My view is simply that the Jevons paradox is an observation of how the capitalist system operates, and as long as this system of relations remains in place, we will see problems with how technology is used.
I think I was very clear that the destructive excesses of capitalism are precisely the problem here. What I continue to point out is that that's a completely separate discussion from whether LLMs exist or not.
The jobs and consequent material conditions are affected by the capitalist system of relations and how it uses automation in ways that are hostile to workers. Automation itself is not the problem here.
Nowhere did I say "do nothing." What I actually said, repeatedly, is that you're focusing on the wrong thing and that I don't see the technology itself as the problem.
I'm not so sure, not when a lot of venture capital money rides on grandiose promises made to dazzle investors (including, in Microsoft's case, vague promises of a nuclear fusion payoff from a startup within four years).
Considering the already present socioeconomic consequences of this unregulated technology, from career- and reputation-threatening deepfakes to further working-class precarity, saying "nothing can be done" in response to such harm sounds like tech inevitabilism to me. Should the same be argued about the worsening surveillance state (which is also being boosted by this technology)? Would it have been worthwhile to say nothing could be done about, say, CFCs, high-fructose corn syrup, partially hydrogenated soybean oil, or leaded gasoline? Saying "this product is doing bad things, but oh well, it's already invented" is tiresome fatalism to me.
Again, the issue here is with capitalism, not with the technology. I personally don't see anything uniquely harmful inherent in LLMs, and I think it's an interesting technology with a lot of legitimate uses. However, it's clear to me that this tech will be used in horrible ways under our current economic system, just like all the other tech that's already being used in horrible ways.
I'm not being fatalistic at all, I just think you're barking up the wrong tree here.