this post was submitted on 19 Jul 2023
0 points
Comradeship // Freechat
2168 readers
25 users here now
Talk about whatever, respecting the rules established by Lemmygrad. Failing to comply with the rules will grant you a few warnings, insisting on breaking them will grant you a beautiful shiny banwall.
A community for comrades to chat and talk about whatever doesn't fit other communities
founded 3 years ago
MODERATORS
In other words, the authors have no idea what they're talking about. We're abstracting to the level of classes, not states. Maybe they focused on the intellectually deprived western Marxist discourse.
Really did feel like a deliberate missing of the whole point, didn't it? Felt like their data stripped out any context the posts they were 'analyzing' had, and they drew deliberately-misleading takes from that sanitized data. Like, do you know any Marxists who make a habit of attacking Muslims? Meanwhile, most of the takedowns I see of Amerikans squawking about 'muh chingchang' (I've deadass heard a white person pronounce it that way; imagine these crackers actually learning how to pronounce 'Xinjiang') wind up boiling down to "Oh yeah, 'cause the country that spent fifteen years murdering Muslims wholesale in the Middle East, and leaving depleted uranium in the sand to mutate their babies, really cares about the Muslims in Xinjiang all of a sudden."
Yeah, any paper worth its weight in flour would at the very least have an appendix with illustrative examples of the comments they find interesting or rate as "high toxicity". By just talking about all the content in the abstract, with random asspull metrics, they get to claim objectivity while presenting zero actual information. Typical for the kind of people who like to reduce countries to their GDP (per capita if you're lucky).
Edit: I didn't notice that they actually did include some in their appendix, after 5 pages with 135 citations. So much bloat, and there are even a couple of Washington Post articles in there. They definitely didn't read a lot of those beyond the abstract. Either way, the examples are just strewn around in the text and don't include their "toxicity level", so the point still stands. Actually the worst "qualitative analysis" I've ever read tbh, and that's usually already the worst part of data science. More like "pseudoscientific cherrypicking".
"If you look here at figure X you can see a selection of the most frequent vocabulary. In figure Y you can see several possibilities of our own design that show the arrangement of the words in figure X into some rather mean and hurtful sentences. Disgraceful. Coincidentally when we sent this paper for peer review both reviewer 1 and reviewer 2 had come to similar conclusions and used a mixture of the words in our paper, the vocabulary database, and some choice additions to say, in similarly mean and hurtful tones, that our work was shit. We can't work out what it means right now but we're going to run the reviews through our system for a meta analysis before concluding that the reviewers we're Lemmygrad users."