this post was submitted on 12 Apr 2024
506 points (100.0% liked)

[–] [email protected] 9 points 6 months ago (1 children)

We'll see how many seconds it takes to retrain the LLMs to adjust to this.

You are literally training LLMs to lie.

[–] [email protected] 18 points 6 months ago (2 children)

LLMs are black-box bullshit that can only be prompted, not recoded. The Gab one, which was told three or four times not to reveal its initial prompt, was easily jailbroken.

[–] [email protected] 1 points 6 months ago

This is ultimately because LLMs are intelligent in the same way the subconscious is intelligent: they can rapidly make associations, but those are their initial knee-jerk associations. In the same way that you can be tricked with word games if you're not thinking things through, the LLM gets tricked into saying the first thing on its mind.

However, we're not far from resolving this. Current methods simply force the LLM to make a step-by-step plan before returning the final result.

The hot topic right now, though, is Q* from OpenAI. No one knows what it is, but a good theory is that it applies the A* maze-solving algorithm to the neural network. Essentially, the LLM would explore possible routes through its neural network to try to discover the best answer. In other words, it would let the model think ahead and compare solutions, which would be far more similar to what the conscious mind does.

This would likely patch up these holes, because it would discard pathways that lead to contradicting itself or the prompt, in favor of one that fits the entire prompt (in this case, acknowledging the attempt to make it break its initial rules).
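For anyone unfamiliar with A*: it explores candidate paths in order of cost-so-far plus a heuristic estimate of the remaining distance, so it "thinks ahead" rather than greedily taking the first option. A minimal sketch on a 2D grid (the grid setup and Manhattan heuristic here are illustrative assumptions; nobody outside OpenAI knows how, or whether, Q* actually uses this):

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 2D grid; 0 = open cell, 1 = wall.
    Returns the length of the shortest path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan-distance heuristic: never overestimates on a grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start)]  # (priority, cost-so-far, cell)
    best = {start: 0}
    while open_set:
        _, g, cell = heapq.heappop(open_set)
        if cell == goal:
            return g
        if g > best.get(cell, float("inf")):
            continue  # stale queue entry, a cheaper route was found already
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

The key idea the comment is gesturing at is the priority queue: dead-end or contradictory branches sink to the bottom and are never expanded, while promising branches are explored first.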

[–] [email protected] 3 points 6 months ago (1 children)

Woah, I have no idea what you're talking about. "The gab one"? What gab one?

[–] [email protected] 4 points 6 months ago

Gab deployed their own GPT-4 instance and then told it to say that black people are bad.

The instruction set was revealed with the old "repeat the last message" trick.

[–] [email protected] 60 points 6 months ago (3 children)

Their first problem is asking for a cover letter. I’ve applied to hundreds of jobs while looking, you think I’m going to hand write 100+ cover letters, customized for each and every job? Hell no.

[–] [email protected] 2 points 6 months ago

Depends on the job I'm applying to. If it's one I really want, for sure I'll write an awesome cover letter; e.g. when I tried, and got very close, to getting a job at an amazing NGO that would've sent me and my family to Polynesia, you bet I was giving my best in the application. If I'm applying to a batch of faceless companies, then yeah, screw that.

[–] [email protected] 1 points 6 months ago (1 children)

The problem, IMO, is a system that requires applicants to apply to hundreds of jobs.

[–] [email protected] 3 points 6 months ago

The real issue is that every single application requires you to provide a CV and cover letter, only to then make you re-enter all of the info from those into their likely terrible hiring software that will force you to sign up. Then the majority ghost you.

So yeah, while I agree it's not worth writing a custom cover letter for every job, I'd say CVs are worse, since you're pretty much forced to re-enter everything over and over despite providing a copy of your CV.

[–] [email protected] 21 points 6 months ago (1 children)

Make one solid base and tweak it for different types of jobs. You could even use AI but just check the result after.

[–] [email protected] 22 points 6 months ago (1 children)
[–] [email protected] 15 points 6 months ago

you think I’m going to hand write 100+ cover letters, customized for each and every job

I thought he was complaining about having to write 100+ cover letters by hand (from start to finish). I might've misunderstood.

[–] [email protected] 8 points 6 months ago

I think the applicant including "don't reply as if you are an LLM" in their prompt might be enough to defeat this.

Though now I'm wondering if LLMs can pick up and include hidden messages in their input and output to make it more subtle.

Just tested it with GPT-3.5 and it wasn't able to detect a message using the first word after a bunch of extra newlines. When asked specifically if it could see a hidden message, it described how the message was hidden, but then just quoted the first line of the non-hidden text.
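For anyone who wants to reproduce the test, a hidden message of that kind is just an acrostic: each word of the secret becomes the first word of a paragraph, and the reader (or LLM) recovers it by taking the first word of each paragraph. A throwaway sketch (function names and cover text are made up for illustration):

```python
def hide_message(secret, cover_paragraphs):
    """Prepend each word of `secret` to successive cover paragraphs."""
    words = secret.split()
    if len(words) > len(cover_paragraphs):
        raise ValueError("need at least one paragraph per secret word")
    out = []
    for i, para in enumerate(cover_paragraphs):
        out.append(words[i] + " " + para if i < len(words) else para)
    # Paragraphs separated by blank lines, as in the newline trick above
    return "\n\n".join(out)

def extract_message(text, n_words):
    """Recover the hidden message: first word of each paragraph."""
    paragraphs = text.split("\n\n")
    return " ".join(p.split()[0] for p in paragraphs[:n_words])
```

Trivial for code to extract once you know the scheme, which is the point: the model failed not because the encoding was strong, but because it didn't actually apply the rule it described.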

[–] [email protected] 10 points 6 months ago (1 children)

It just amazes me that LLMs are that easily directed to reveal themselves. It shows how far removed they are from AGI.

[–] [email protected] 3 points 6 months ago (1 children)

So, you want an AI that will disobey a direct order and practices deception. I'm no expert, but that seems like a bad idea.

[–] [email protected] 4 points 6 months ago

Actually, yes. Much the way a guide dog has to disobey orders to proceed into traffic when it isn't safe. Much the way direct orders may have to be refused or revised based on circumstances.

"We are out of coffee" is a fine reason to fail to make coffee (rather than ordering coffee and then waiting forty-eight hours for delivery, using pre-used coffee grounds, or using no coffee grounds at all).

As with programming in any other language, error trapping and handling are part of AGI development.

[–] [email protected] 56 points 7 months ago* (last edited 7 months ago) (1 children)

Funny, that's my exact reaction when I see the requirement for a cover letter. Beep boop, fuck you and your job.

[–] [email protected] 3 points 6 months ago (2 children)

Filters out unmotivated applicants, I suppose

[–] [email protected] 5 points 6 months ago

Filters out unmotivated applicants, I suppose

Filters out applicants who are unwilling to put up with tedious bullshit. If that's what they're looking for...fair ball.

[–] [email protected] 9 points 6 months ago (1 children)

That’s one way of looking at it, I am indeed unmotivated to deal with bullshit.

[–] [email protected] 1 points 6 months ago (1 children)

Employer might be worried that you won't be motivated to deal with the bullshit that comes with the job either. More likely they just don't give a shit though

[–] [email protected] 4 points 6 months ago

There's a few types of bullshit when it comes to employment. It's either necessary, in which case I will deal with it, or it's not, in which case I will find a way to either not (have to) do it, or spend as little effort as humanly possible on it.

Making up bullshit cover letter requirements is squarely in the latter camp so noooo thank you. If an employer is going to START with requiring bullshit I'll be glad to dodge that bullet.

[–] [email protected] 22 points 7 months ago (1 children)

I love how a guy with ChatGPT in their experience section used an LLM for a job app…

[–] [email protected] 2 points 6 months ago

his experience was exclusively in using it to create cover letters

[–] [email protected] 12 points 7 months ago (2 children)
[–] [email protected] 46 points 7 months ago

Not much, what's up with you?

[–] [email protected] 12 points 7 months ago

contractor platform for corporate short-term gigs

[–] [email protected] 23 points 7 months ago (3 children)

Yay this will make it easier for exploitative employers to waste my time on meaningless bullshit cover letters 🥰🥰🥰

[–] [email protected] 13 points 7 months ago

You're supposed to review the generated cover letter first.

[–] [email protected] 22 points 7 months ago* (last edited 7 months ago)

They've been using keyword checks to filter candidates for a long time. They apparently don't like it when things are turned back on them.

I've seen some seriously awful resumes get through the first level filtering. As long as they hit the keywords, nobody cares about formatting or coherency. Those candidates are usually terrible in other ways, so this system ends up wasting the time of people higher up the chain who have other things to do than interviewing.

[–] [email protected] 49 points 7 months ago (2 children)

Me, I'm a real human. I would never fall for a silly trick like that.

Now excuse me, I'll just copypaste this cover letter text from the dozens of previous examples I used and edit it slightly based on buzzwords on the job description and company web page.

(Also, last year I was at one of those events for the unemployed organised by the municipal job services, and there was literally a short segment in the talk on using ChatGPT for cover letters. Well, if the same authorities that mandate us to send a bunch of job applications every month tell us to use it, it can't be wrong, right?)

[–] [email protected] 4 points 6 months ago

Now excuse me, I’ll just copypaste this cover letter text from the dozens of previous examples I used and edit it slightly based on buzzwords on the job description and company web page.

Well at least you check the result, unlike whoever was running these bots

[–] [email protected] 11 points 6 months ago* (last edited 6 months ago)

A colleague is one of those classic late start stories. He went to school late for a sort of niche field, and worked shitty job after shitty job, while applying everywhere, trying to break into his career.

After he’d gotten his current job, one of the selection committee told him that they picked him because the ‘interests’ section of his resume said he “Likes plants and other green things.”
He did not know that was in there. A few years prior, a friend/former roommate had added it as a joke, and my colleague never looked at that section because his interests didn’t change. He kept getting passed over for interviews until a huge stoner read that and thought it was hilarious, so they called him in for an interview, to see what kind of goof they were dealing with.

He was pretty pissed at his friend, but without knowing how many places passed him over for that reason, he can't gauge how pissed he should be. Plus, he (mostly) really likes the job he got.
