this post was submitted on 02 Sep 2024
807 points (92.9% liked)

solarpunk memes

2594 readers
851 users here now

For when you need a laugh!

The definition of a "meme" here is intentionally pretty loose. Images, screenshots, and the like are welcome!

But, keep it lighthearted and/or within our server's ideals.

Posts and comments that are hateful, trolling, inciting, and/or overly negative will be removed at the moderators' discretion.

Please follow all slrpnk.net rules and community guidelines

Have fun!

founded 2 years ago
(page 2) 42 comments
[–] [email protected] 3 points 2 weeks ago (3 children)

Ok, been thinking about this and maybe someone can enlighten me. Couldn't LLMs be used for code breaking and encryption cracking? My thought is that language has a cadence, so even if you scramble it to hell, shouldn't that cadence still be present in the encrypted output? Couldn't you feed an LLM a bunch of encrypted data and train it to look for conversational patterns, spitting out likely dialogues?

[–] [email protected] 8 points 2 weeks ago (1 children)

That would probably be a task for regular machine learning. Plus proper encryption shouldn't have a discernible pattern in the encrypted bytes. Just blobs of garbage.
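To make the "blobs of garbage" point concrete, here's a quick sketch comparing byte entropy. It uses random bytes as a stand-in for ciphertext, on the assumption that well-encrypted output is statistically indistinguishable from random:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte (8.0 is the maximum, for uniformly random data)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Plain English: a heavily skewed byte distribution, well under 8 bits/byte
text = b"the quick brown fox jumps over the lazy dog " * 2000

# Random bytes standing in for well-encrypted output
noise = os.urandom(len(text))

print(f"plaintext entropy:    {shannon_entropy(text):.2f} bits/byte")
print(f"'ciphertext' entropy: {shannon_entropy(noise):.2f} bits/byte")
```

English tops out far below the 8 bits/byte ceiling because its byte distribution is so skewed; that skew is exactly the kind of pattern good encryption erases.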

[–] [email protected] 2 points 2 weeks ago

Thanks for the reply! I'm obviously not a subject matter expert on this.

[–] [email protected] 8 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Could there be patterns in ciphertext? Sure. But modern cryptography is designed specifically against patterns like the one you describe. Every modern cryptographic algorithm that's considered good has the avalanche effect baked in as a basic design requirement:

https://en.m.wikipedia.org/wiki/Avalanche_effect

Basically, using the same encryption key, if you change one character in the input text, the ciphertext will be completely different. That doesn't mean there couldn't possibly be patterns like the one you described, but it makes it very unlikely.
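Here's a quick sketch of the avalanche effect, using SHA-256 as a stand-in (a hash rather than a cipher, but it's designed to the same requirement): a one-character change to the input flips roughly half of the 256 output bits.

```python
import hashlib

def differing_bits(a: bytes, b: bytes) -> int:
    """Count the bits that differ between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

# Two inputs that differ in exactly one character ("dawn" vs "down")
d1 = hashlib.sha256(b"attack at dawn").digest()
d2 = hashlib.sha256(b"attack at down").digest()

# Avalanche: roughly half of the 256 output bits flip
print(differing_bits(d1, d2), "of 256 bits differ")
```

With a toy substitution cipher you'd see the cadence survive; with anything modern, every output bit is effectively a coin flip relative to the input.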

More to your point: given the number of people playing with LLMs these days, I doubt LLMs have any special ability to find whatever minute, intentionally obfuscated patterns may exist. We would have heard about it by now. Or maybe we just don't know about it. But I think the odds are really low.

[–] [email protected] 3 points 2 weeks ago

Very informative! Thank you.

[–] [email protected] 7 points 2 weeks ago (3 children)

I'm on an internship with some really nice people at a company that does sustainable stuff.

But honestly, they have a whole list of AI tools they plan to use to make automated presentations... like wtf?

[–] [email protected] -3 points 2 weeks ago

The answer to all these questions is actually yes, but sure, invent absurd conspiracy theories if you want to feel smart or whatever. You're literally the same as the antivaxxers and 5G-phobes, but whatever I guess, if it makes you feel superior.

[–] [email protected] 43 points 2 weeks ago (2 children)

Eh, most of the marketing around ai is complete bullshit, but I do use it on a regular basis for my work. Several years ago it would have just been called machine learning, but it saves me hours every day. Is it a magic bullet that fixes everything? No. But is it a powerful tool that helps speed up the process? Yes.

[–] [email protected] 8 points 2 weeks ago (3 children)

Who is getting the reward for speeding up your work? Do you get to slack off more? How long will that last? Or does more work get piled on, making your employer richer not you?

[–] [email protected] 14 points 2 weeks ago

Not a problem of the AI

[–] [email protected] 5 points 2 weeks ago

Most AI is being developed to try to sustain the need for content for social networks. The bots are there to make it feel lived in so they can advertise to you. They are running out of people who are willing to give them free content while they make billions off your art. So then, they just replace the artist.

[–] [email protected] 11 points 2 weeks ago (1 children)

you're leaving out the main question: do they increase profit? YES.

so nothing anyone says matters. prepare your anus

[–] [email protected] 11 points 2 weeks ago (1 children)

I mean, the students around me who would have failed by now without ChatGPT probably DO want it. But they don't actually want the consequences that come with it. The academic world will adapt and adjust, kind of like with inflation: you can just print more money, but that won't actually make everyone richer long term.

[–] [email protected] 45 points 2 weeks ago (9 children)

Most of the hate is coming from people who don't really know anything about "AI" (LLMs). Which makes sense: companies are marketing dumb gimmicks to people who don't need them, and after the novelty wore off, those people aren't terribly impressed.

But LLMs are absolutely going to be transformational in some areas. And in a few years they may very well become useful and usable as daily drivers on your phone and so on; it's hard to say for sure. But both the hype and the hate are just kneejerk reactionary nonsense for the moment.

[–] [email protected] 3 points 2 weeks ago

I'm completely overtaxed mentally, and I offload so much to it: from reconciling bank statements and sorting game mods, to a homebrew ongoing multiverse starring my son, and which emojis to use in Notion at work.

[–] [email protected] 28 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

No, the "hate" is from people trying to raise alarms about the safeguards we need to put in place NOW to protect workers and creators before it's too late, to say nothing of what it will do to the information sphere. We are frustrated by tone-deaf responses like this, which dismiss the concern as just a fad of hating on AI.

OF COURSE it will be transformational. No shit. That's exactly why many people are very justifiably up in arms about it. It's going to change a lot of things, probably everything, irreversibly, and if we don't get ahead of it with regulations and standards, we won't be able to. And the people who will use tools like this to exploit others -- because those people will ALWAYS use new tools to exploit others -- they want that inaction, and love it when they hear people like you saying it's just a kneejerk reaction.

[–] [email protected] -1 points 2 weeks ago (1 children)

At what point in history did we ever halt the deployment of a new technology to protect workers?

[–] [email protected] 14 points 2 weeks ago (1 children)

Never. That's the problem with history. Happy Labor Day.

[–] [email protected] 2 points 2 weeks ago

Or just the problem with technology in general. Every gain is bought with a tradeoff.

Once a man has changed the relationship between himself and his environment, he cannot return to the blissful ignorance he left. Motion, of necessity, involves a change in perspective.

Commissioner Pravin Lal, "A Social History of Planet"

[–] [email protected] 1 points 2 weeks ago (2 children)

at the end of the day gpt is powering next generation spam bots and writing garbage text, stable diffusion is making shitty clip art that would otherwise be feeding starving artists….
all the while consuming ridiculous amounts of electricity while humanity is destroying the planet with stuff like power generation….

it’s definitely automating a lot of tedious things… but not transforming anything that drastically yet….

but it will… and when it does, the agi that emerges will kill us all.

[–] [email protected] -2 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

A far more likely end to humanity by an Artificial Superintelligence isn’t that it kills us all, but that it domesticates us into pets.

Since the most obvious business case for AI requires humans to use AI a lot, it’s optimized by RLHF and engagement. A superintelligence created using human feedback like that will almost certainly become the most addictive platform ever created. (Basically think of what social media did to humanity, and then supercharge it.)

In essence, we will become the kitties and AI will be our owners.

[–] [email protected] 12 points 2 weeks ago (2 children)

but that it domesticates us into pets.

So all our needs and wants will be taken care of and we no longer have to work or pay bills?

Welp, I for one welcome our ~~robot~~ AI overlords

[–] [email protected] 2 points 2 weeks ago (1 children)

Yes, I believe that will be the ultimate end of AI. I don’t think billionaires are immune from the same addictions that the rest of us are prone to. An AI that takes over will not answer to wealthy humans, it will domesticate them too.

[–] [email protected] 3 points 2 weeks ago

I don’t think billionaires are immune from the same addictions that the rest of us are prone to.

I'd argue that it's likely they are more prone to addiction, but their drug of choice is power.

[–] [email protected] 66 points 2 weeks ago (5 children)

Most of the hate is coming from people who don't really know anything about "AI" (LLM)

No.

As an actual subject matter expert, I hate all of this, because assholes are overselling it to people who don't know better.

[–] [email protected] 42 points 2 weeks ago (3 children)

My hatred of AI comes from seeing the double standard between how mass market media companies treat us when we steal from them versus when they steal from us. They want law and enforcement to be a fully one-way street. The House of Mouse owns all the media they create, and any media that remixes work they create. But when we create a new original idea, by the nature of the training model, they want to own that too.

I also work with these tech bro industry leaders. I know what they're like. When they say they want to make it easier for non-artistic people to create art, they're not telling you about an egalitarian and magnificent future. They're telling you how they want to stop paying the graphic designers and copy editors who work at their company. The vision they have for the future is based on a fundamental misunderstanding about whether the future presented in Blade Runner is:

a) Cool and awesome
b) Horrifying

They want to enslave sentient beings to do the hard work of mining, driving, and shopping for them. They don't want those people doing art and poetry because they want them to be too busy mining, driving, and shopping. This whole thing -- this whole current wave of AI technology -- doesn't benefit you except fleetingly. LLMs, ethically trained, could indeed benefit society at large, but that's not who's developing them, and that's not how they're being trained. Their models are intrinsically tainted by the double standard these corporations have, because their only goal is to benefit from our labor without benefiting us.

[–] [email protected] 17 points 2 weeks ago

They want to enslave sentient beings to do the hard work of mining, driving, and shopping for them. They don't want those people doing art and poetry because they want them to be too busy mining, driving, and shopping.

That's a great summary of the core issue!

I adore the folks doing cool new things with AI. I am unhappy with the folks deciding what should get funded next in AI.

[–] [email protected] 7 points 2 weeks ago

The people being oversold to are the people who don't know anything about it. I guess you can hate the people doing the overselling, but don't hate the field. It's one of the most promising areas of computer research being done right now.

[–] [email protected] -3 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

There is a lot of blind hate, because it's edgy right now to be against it.

This thing is already transformational, and we can already see a glimpse of where it's going. I think it's normal that we have a bunch of stupid half-products right now. People just have to realise AI is under development and new advancements are coming weekly.

Besides, what are we going to do, not develop it? Just abandon the whole technology? That's nonsense.

[–] [email protected] 8 points 2 weeks ago (1 children)

Besides, what are we going to do, not develop it? Just abandon the whole technology? That’s nonsense.

The tech industry will happily abandon it as soon as the next hype train comes along – we’ve already seen it happen with multiple “innovations” – dotcom, subprime, crypto, NFTs …

[–] [email protected] 0 points 2 weeks ago (1 children)

It's not comparable. This isn't just anything; it's a technology we've wanted and dreamed about since probably forever.

[–] [email protected] 13 points 2 weeks ago (2 children)

Not blind hate. AI will be devastating to the environment due to its power and water consumption. We need to ask ourselves if the future water wars will be worth the corporate profits.

[–] [email protected] -1 points 2 weeks ago (1 children)

I think people are overestimating how much power AI will consume in the long term. Training a model takes way more power than running it, and once we understand the tech better, models can be developed for specific applications. It would be like Edison, when he was first working on the light bulb, extrapolating the power usage of whatever filament he was testing to every household in the world.

Also, it doesn't have to be corporate profits. Individuals can benefit from AI. There's a structural problem with capitalism, not with this technology.

[–] [email protected] 11 points 2 weeks ago* (last edited 2 weeks ago)

Besides, what are we going to do, not develop it? Just abandon the whole technology? That's nonsense.

As someone who knows a substantial amount about how LLMs actually work:

  • I'm delighted that AI companies are developing this technology.
  • I'm annoyed, but not angry, that phone and PC makers are developing this technology. I don't want it, yet. I'll probably appreciate it when they get it right. (I'll wait for the version that ships with Debian, because that's the only OS maker whose AI I would trust not to be deeply invasive to my privacy.)
  • I'm irritated that car companies, real estate investment companies, web browser developers, stock traders, and everyone else who was "all-in" on virtual reality two years ago are making a lot of noise about developing this technology. They don't hire the necessary talent, and their results are shit. Real investment returns require real investments, which these hype-followers haven't proven capable of making.
[–] [email protected] 30 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

AI is absolutely going to be transformative, but a lot of the hate right now isn't about the technology itself but about the way companies are jumping on it and forcing it down the throats of people who don't want it, in a way that worsens the customer experience. Yes, let's force AI into every software product. Yes, let's take away the humans you used to talk to and make them all bots instead.

Even from within tech itself there is huge resentment because you've got corps pumping billions into AI while at the same time slashing their workforce to afford those billions, with no clear return in sight.

Tech is treating AI as the next dotcom boom and pumping everything into it, but just like it did then the bubble of investment will burst, and there will be losers as well as winners.

I'm running self-hosted LLMs at home and I'm having huge fun experimenting with their capabilities. I just wish LLMs could have been implemented in the real world with space for ethics and the human factor, not the pure profit chasing bullshit we actually got.

[–] [email protected] 3 points 2 weeks ago

I agree, but I think there's no way around this forcing-down-the-throat, this slashing of people in favour of a barely functioning product. Don't get me wrong, I wish it were done the right and fair way, but realistically no one with any power wants it done fairly.

[–] [email protected] 13 points 2 weeks ago* (last edited 2 weeks ago)

AI is absolutely going to be transformative but a lot of the hate right now isn't the technology itself but the way companies are jumping on it and forcing it down the throats of people who don't want it, in a way that worsens their customer experience.

Exactly.

We did the same shit with mobile apps in 2009 - there was a mobile app - that no one wanted - being pushed hard - for every imaginable purpose.

I do still use mobile apps.

But I don't have a dedicated mobile app installed for buying socks for my pets.

AI, today, is burdened with the same shit. It'll calm down, after failing to deliver the vast majority of what is currently being promised.
