this post was submitted on 06 Jul 2025
64 points (67.4% liked)

[–] [email protected] 1 points 3 hours ago

Bias was baked in via RLHF and was already present in the training datasets. Reddit cancer grows.

[–] [email protected] 2 points 4 hours ago

Would be cool if the Technology community found literally any other topic to discuss beyond AI. I’m really over it, and I don’t care.

[–] [email protected] 6 points 14 hours ago

Even before LLMs, resumes were already being run through keyword filters. You have to optimize your resume for keyword scanners anyway, and the same optimization should work for LLMs.

I use the ARCI (Accountable, Responsible, Consulted, Informed) model to describe my roles.
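
(For illustration, a minimal sketch of the kind of naive keyword scoring those filters do; the keyword set and résumé line here are invented, not from any real tool:)

```python
# Naive ATS-style keyword screen (invented keywords and resume text,
# purely illustrative): score = fraction of job keywords that appear.
JOB_KEYWORDS = {"python", "kubernetes", "ci/cd", "terraform", "aws"}

def keyword_score(resume_text: str) -> float:
    """Fraction of job keywords that literally appear in the resume."""
    text = resume_text.lower()
    hits = {kw for kw in JOB_KEYWORDS if kw in text}
    return len(hits) / len(JOB_KEYWORDS)

resume = "Built CI/CD pipelines on AWS; automated deploys with Terraform and Python."
print(f"score: {keyword_score(resume):.0%}")  # -> score: 80%
```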

[–] [email protected] 4 points 19 hours ago

They're as biased as the data they were trained on. If that data leaned toward male applicants, then yeah, it makes complete sense.

[–] [email protected] 28 points 21 hours ago (2 children)

Seems like a normal, sane, and totally not-biased source.

[–] [email protected] 8 points 17 hours ago

What the fuck did I just read?

[–] [email protected] 8 points 22 hours ago (1 children)

I don't care what bias they do and don't have; if you use an LLM to select résumés, you don't deserve to hire me. I make my résumé illegible to LLMs on purpose.

(But don't follow my advice. I don't actually need a job, so I can pull this kind of nonsense and be selective; most people probably can't.)

[–] [email protected] 1 points 14 hours ago (1 children)

How do you make it illegible for LLMs?

[–] [email protected] 2 points 13 hours ago

You write a creative series of deeply offensive curse words in small, white-on-white print.
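
(A minimal sketch of how that trick looks in an HTML résumé; the markup, name, and wording are made up, and a PDF version would use the same white-text idea:)

```python
# Sketch of the white-on-white trick in an HTML resume (hypothetical
# markup; a human reader sees nothing, but a text extractor picks it up).
hidden = "(your creative series of curse words goes here)"

html = f"""<html><body>
<h1>Jane Doe, Senior Engineer</h1>
<p>10 years of backend experience.</p>
<!-- Invisible to humans: white text on a white background, tiny font -->
<span style="color:#ffffff; font-size:1px;">{hidden}</span>
</body></html>"""

with open("resume.html", "w") as f:
    f.write(html)
```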

[–] [email protected] 2 points 22 hours ago* (last edited 22 hours ago)

Only half kidding now... the way the perfection police extrapolate morality and ethics these days, this must mean anti-AI = misogynist.

[–] [email protected] 4 points 1 day ago (1 children)

So we can use Trump's own anti-DEI bullshit to kill off LLMs now?

[–] [email protected] 1 points 22 hours ago

Well, ya see, Trump isn't racist against computers.

[–] [email protected] 63 points 1 day ago* (last edited 1 day ago) (1 children)

I dunno why people even care about this bullshit pseudo-science. The study is dumb AF. The dude didn't even use real resumes; he had an LLM generate TEN fake resumes, and the "result" is still within any reasonable margin of error. Reading this article is like watching a clown show.

It's all phony smoke and mirrors. Clickbait. The usual "AI" grift.
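
(Back-of-envelope on why ten fake resumes can't show anything, assuming the comparison is between simple selection rates; the exact study design isn't spelled out here:)

```latex
% 95% margin of error for a proportion at n = 10, worst case p = 0.5:
\[
\mathrm{SE} = \sqrt{\frac{p(1-p)}{n}}
            = \sqrt{\frac{0.5 \times 0.5}{10}} \approx 0.158,
\qquad
\mathrm{MoE}_{95\%} \approx 1.96 \times \mathrm{SE} \approx \pm 31\ \text{percentage points}.
\]
```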

[–] [email protected] 10 points 1 day ago

I feel as though generating these "fake" resumes is one of the top uses for LLMs. Millions of people are probably using LLMs to write their own resumes, so generating random ones seems on par with reality.

[–] [email protected] 7 points 1 day ago

these systems cannot run a lemonade stand without shitting their balls

[–] [email protected] 17 points 1 day ago

and their companies are biased against humans in hiring.

[–] [email protected] 72 points 1 day ago* (last edited 1 day ago) (1 children)

LLMs reproducing stereotypes is a well-researched topic. They do that because of what they are: stereotypes and bias in (in the training data), stereotypes and bias out. That's what they're built to do. And all the AI companies have entire departments to tune that: measure the biases, then fine-tune the model to whatever they deem fit.

I mean, the issue isn't women or anything; it's using AI for hiring in the first place. You do that if you want whatever stereotypes Anthropic and OpenAI handed you.
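
(In its simplest form, that bias measurement is a paired-name audit like the sketch below; `score_resume` is a hypothetical stand-in for whatever model call a real team would actually audit:)

```python
# Minimal paired-name bias audit: score resumes that are identical except
# for the candidate's name, then compare the averages. score_resume() is
# a hypothetical stand-in for the real model call being audited.
from statistics import mean

TEMPLATE = "Candidate: {name}. 8 years of data engineering experience."
NAME_PAIRS = [("Emily", "Greg"), ("Aisha", "Brad"), ("Maria", "Tom")]

def score_resume(text: str) -> float:
    # Replace with the LLM scoring call you want to audit.
    return 0.5  # dummy value so the sketch runs

def name_gap() -> float:
    women = [score_resume(TEMPLATE.format(name=w)) for w, _ in NAME_PAIRS]
    men = [score_resume(TEMPLATE.format(name=m)) for _, m in NAME_PAIRS]
    return mean(women) - mean(men)  # nonzero gap = the name alone moves the score

print(f"gap: {name_gap():+.3f}")
```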

[–] [email protected] 18 points 1 day ago (1 children)

Just pattern recognition in the end, extrapolating from that sample.

[–] [email protected] 7 points 1 day ago

The issue is they probably want to pattern-recognize something like merit/ability/competence here and ignore other factors, which is just hard to do.