this post was submitted on 19 Jul 2023
0 points

Comradeship // Freechat

2168 readers
25 users here now

Talk about whatever, respecting the rules established by Lemmygrad. Failing to comply with the rules will grant you a few warnings; insisting on breaking them will grant you a beautiful shiny banwall.

A community for comrades to chat and talk about whatever doesn't fit other communities

founded 3 years ago

The whole article is quite funny, especially the lists of most-used tankie words, or the branding of foreignpolicy as a left-wing news source.

[–] [email protected] 0 points 1 year ago* (last edited 1 year ago) (2 children)

Me reading this:

It sure is lovely that the "AI" "Revolution" has given hacks a bunch of hard-to-audit but scientific-sounding metrics for them to apply however they want.

Armchair peer review time: I'd love to see them introduce a control group for their "toxicity" model by including subs from their other identified clusters. How can you know what it means for tankies to be millions of billions toxic if you don't have baselines? I do like how they agree with r/liberal and r/conservative being in the same cluster, though. On the domain analysis, I'd require them to also include the total number of articles and not just the percentages, which I'd bet would give a fun graph.

Overall, I've read less funny and more informative parody papers. For the AI nerds, this one might be fun.

[–] [email protected] 0 points 1 year ago (1 children)

It sure is lovely that the “AI” “Revolution” has given hacks a bunch of hard-to-audit but scientific-sounding metrics for them to apply however they want.

I'm slogging through it right now and coming to similar assessments. "With enough Machine Learning shenanigans, I can arrive at whatever conclusion I want!"

[–] [email protected] 0 points 1 year ago

They're tired of gaslighting people into becoming liberals, now they're doing it with machines. Whoever thought of letting misinformation giants like Google "teach" "AI" should be fired ~~at~~.

[–] [email protected] 0 points 1 year ago

🤣🤣🤣 I'm in tears. Actual tears.

I’d love to see them introduce a control group for their “toxicity” model by including subs from their other identified clusters. How can you know what it means for tankies to be millions of billions toxic if you don’t have baselines?

Ironically(?), the funding is to develop a machine-learning algorithm not to spot and moderate racism, but to spot the less racist of any two examples. In other words, the project is to develop a comparative model, yet they haven't thought of using comparison within the research itself. Meanwhile, real scholars get fired all over the place for being in unions and demanding a living wage.