this post was submitted on 10 Apr 2024
1302 points (99.0% liked)

Programmer Humor

[–] [email protected] 113 points 9 months ago (1 children)

Damn it, all those stupid hacking scenes in CSI and stuff are going to be accurate soon

[–] [email protected] 36 points 9 months ago

Daang and it's a very nice avatar.

[–] [email protected] 44 points 9 months ago* (last edited 9 months ago) (15 children)

The problem was “could you.” Tell it to do it as if giving a command and it should typically comply.

[–] [email protected] 181 points 9 months ago (33 children)

What I think is amazing about LLMs is that they are smart enough to be tricked. You can't talk your way around a password prompt. You either know the password or you don't.

But LLMs have enough of something intelligence-like that a moderately clever human can talk them into doing pretty much anything.

That's a wild advancement in artificial intelligence. Something that a human can trick, with nothing more than natural language!

Now... Whether you ought to hand control of your platform over to a mathematical average of internet dialog... That's another question.

[–] [email protected] 53 points 9 months ago (5 children)

There's a game called Suck Up that is basically that, you play as a vampire that needs to trick AI-powered NPCs into inviting you inside their house.

[–] [email protected] 34 points 9 months ago (1 children)

I was amazed by the intelligence of an LLM when I asked how many times you need to flip a coin to be sure it shows both heads and tails. Answer: 2. If the first toss is e.g. heads, then the 2nd will be tails.
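(For the record: with a fair coin there is no finite number of flips that guarantees seeing both faces. After n independent flips, the probability of having seen both heads and tails is 1 - 2*(1/2)^n = 1 - 2^(1-n), so two flips give only a 50% chance, and the expected number of flips until both faces have appeared is 3.)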

[–] [email protected] 31 points 9 months ago

You only need to flip it one time. Assuming it is laying flat on the table, flip it over, bam.

[–] [email protected] 20 points 9 months ago

You can trick it with natural language, just as you could trick a password form with a simple SQL injection.
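(A minimal sketch of that classic injection, assuming a made-up schema and a naive string-built query - nothing here is from a real system:)

```python
import sqlite3

# Toy database with one user (hypothetical schema, for illustration only).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def naive_login(name: str, password: str) -> bool:
    # Vulnerable: user input is spliced straight into the SQL string.
    query = f"SELECT 1 FROM users WHERE name = '{name}' AND password = '{password}'"
    return db.execute(query).fetchone() is not None

print(naive_login("alice", "wrong"))        # False
print(naive_login("alice", "' OR '1'='1"))  # True - the injection bypasses the check
```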

[–] [email protected] 93 points 9 months ago (5 children)

I don't want to spam this link, but seriously, watch this 3blue1brown video on how text transformers work. You're right on that last part, but it's a far cry from intelligence. Just a very clever use of statistical methods. And it's precisely for that reason that it can be "convinced": the parameters restraining its output have to be weighted into the model, so they're just statistics that can fail.

I'm not intending to downplay the significance of GPTs, but we need to temper the hype around them before we can discuss where AI goes next and what it can mean for people. And certainly before we use them for any secure services, because we've already seen what can happen.
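(The video covers the real mechanics, but here's a toy illustration of "just a statistic": the final step of every generated token is a probability distribution that gets sampled. The logits and vocabulary below are made up, not from any real model:)

```python
import math
import random

# Toy next-token step with made-up logits over a made-up vocabulary.
# A real transformer computes these scores from billions of weights,
# but the last step is the same: softmax into probabilities, then sample.
logits = {"sure": 2.0, "sorry": 1.5, "the": 0.3, "password": 0.1}

def softmax(scores):
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)       # every continuation keeps nonzero probability...
print(next_token)  # ...so a "restrained" output can still get sampled
```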

[–] [email protected] -2 points 9 months ago (1 children)

The problem is that the majority of the human population is dumber than GPT.

[–] [email protected] 8 points 9 months ago* (last edited 9 months ago) (5 children)

> but it's a far cry from intelligence. Just a very clever use of statistical methods.

Did you know there is no rigorous scientific definition of intelligence?

Edit: facts

[–] [email protected] 15 points 9 months ago

We do not have a rigorous model of the brain, yet we have designed LLMs. Experts with decades in ML recognize that there is no intelligence happening here, because yes, we don't understand intelligence, certainly not enough to build one.

If we want to take from definitions, here is Merriam-Webster's:

> (1) : the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason
>
> (2) : the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)

The context window is the closest thing we have to retaining and applying old info to newer context; the rest is in the name. Generative Pre-trained language models: their output is baked by a statistical model finding similar text, also coined "stochastic parrots" by some ML researchers, a name I find more fitting. There's also no doubt of their potential (and already practiced) utility, but they're a long shot from being considered a person by law.

[–] [email protected] 37 points 9 months ago

Oh, for sure. I focused on ML in college. My first job was actually coding self-driving vehicles for open-pit copper mining operations! (I taught gigantic earth tillers to execute 3-point turns.)

I'm not in that space anymore, but I do get how LLMs work. Philosophically, I'm inclined to believe that the statistical model encoded in an LLM does model a sort of intelligence. Certainly not consciousness - LLMs don't have any mechanism I'd accept as agency or any sort of internal "mind" state. But I also think the common description of "supercharged autocorrect" is over-reductive. Useful as a rhetorical counter to the hype cycle, but just as misleading in its own way.

I've been playing with chatbots of varying complexity since the 1990s. LLMs are frankly a quantum leap forward. Even GPT-2 was pretty much useless compared to modern models.

All that said... All these models are trained on the best - but mostly worst - data the world has to offer... And if you average a handful of textbooks with an internet-full of self-confident blowhards (like me) - it's not too surprising that today's LLMs are all... kinda mid compared to an actual human.

But if you compare the performance of an LLM to the state of the art in natural language comprehension and response... It's not even close. Going from a suite of single-focus programs, each using keyword recognition and word stem-based parsing to guess what the user wants (Try asking Alexa to "Play 'Records' by Weezer" sometime - it can't because of the keyword collision), to a single program that can respond intelligibly to pretty much any statement, with a limited - but nonzero - chance of getting things right...
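(A hypothetical sketch of that collision - made-up intent names and rules, not Alexa's actual pipeline:)

```python
# Hypothetical keyword-based intent parser, in the spirit of pre-LLM
# assistants. Song titles aren't distinguished from command words.
def parse_intent(utterance: str) -> str:
    words = utterance.lower().replace("'", " ").split()
    # Keyword checks fire in priority order.
    if "records" in words:
        return "SHOW_HISTORY"  # collides with the song title "Records"
    if "play" in words:
        return "PLAY_MUSIC"
    return "UNKNOWN"

print(parse_intent("Play 'Records' by Weezer"))    # SHOW_HISTORY, not PLAY_MUSIC
print(parse_intent("Play Buddy Holly by Weezer"))  # PLAY_MUSIC
```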

This tech is raw and not really production ready, but I'm using a few LLMs in different contexts as assistants... And they work great.

Even though LLMs are not a good replacement for actual human skill - they're fucking awesome. 😅

[–] [email protected] 13 points 9 months ago (1 children)

They're not "smart enough to be tricked" lolololol. They're too complicated to have precise guidelines. If something as simple and stupid as this can't be prevented by the world's leading experts, idk. Maybe this whole idea was thrown together too quickly and should be rebuilt from the ground up. We shouldn't be trusting computer programs with sensitive stuff if experts are still only kinda guessing how they work.

[–] [email protected] 1 points 9 months ago (7 children)

Have you considered that one property of actual, real-life human intelligence is being "too complicated to have precise guidelines"?

[–] [email protected] 1 points 9 months ago

Absolutely fascinating point you make there!

[–] [email protected] 0 points 9 months ago (3 children)

Not even close to similar. We can create rules and a human can understand if they are breaking them or not, and decide if they want to or not. The LLMs are given rules but they can be tricked into not considering them. They aren't thinking about it and deciding it's the right thing to do.

[–] [email protected] 4 points 9 months ago

Have you heard of social engineering and phishing? I consider those to be analogous to uploading new rules for ChatGPT, but since humans are still smarter, phishing and social engineering seem more advanced.

[–] [email protected] 2 points 9 months ago

> We can create rules and a human can understand if they are breaking them or not...

So I take it you are not a lawyer, nor any sort of compliance specialist?

> They aren't thinking about it and deciding it's the right thing to do.

That's almost certainly true; and I'm not trying to insinuate that AI is anywhere near true human-level intelligence yet. But it's certainly got some surprisingly similar behaviors.
