In case you're wondering what's considered "violent content" on Reddit and how far they'll push this definition, calling the top 10 breakdown of healthcare CEOs "Luigi's List" counts.
I left Reddit because they killed 3rd party apps. Since that day, they've only gone on to butcher the platform even further.
Joining Lemmy was a damn good decision.
Damn good indeed! Plus with a proper app for Mastodon and Pixelfed, we've got it all!
What is a tankie?
You're about to get 10 different answers; most would say:
- Any user from the Lemmygrad and Hexbear instances, and some from Lemmy.ml
- Genocide deniers
- Pro-Russia and pro-China users
I think it's just a derogatory term for anyone who's anti-imperialism/liberalism.
Got it, thank you!
Reddit will now issue warnings to users who “upvote several pieces of content banned for violating our policies” within “a certain timeframe,” starting first with violent content, the company announced on Wednesday.
“This will have no impact on the vast majority of users as most already downvote or report abusive content,” a Reddit employee says in the announcement post. In comments on the post, a user expressed concern that the new policy could make people “paranoid about voting,” but the employee says that “this would be an unacceptable side effect, which is why we want to monitor this closely and ramp it up thoughtfully.”
If it violates policies, remove it and move on. This is weird.
“We have done this in the past for quarantined communities and found that it did help to reduce exposure to bad content, so we are experimenting with this sitewide,” according to the main post. Reddit “may consider” expanding the warnings in the future to cover repeated upvotes of other kinds of content, as well as taking other types of actions in addition to warnings.
Now that is Thoughtcrime territory.
“Hey investors, look away from the huge pile of porn we’re hosting and look at these cool content filters we’re adding.”
This reminds me lots of when Facebook started applying warnings/bans/etc retroactively to content without any context. I remember getting several wrist slaps in the same month for content I had shared a decade prior that really wasn't all that bad. But Facebook decided it was a problem and made me question what I was allowed to post in the future. It didn't take long after that for me to stop using Facebook completely.
The exact wording seems to be "banned content", which includes a lot more than just violence (is violence banned in the first place, considering subs like r/PublicFreakout?).
Yeah, tbh, if it were just violent content I'd be fine with that. I don't trust Reddit as a platform anymore, but I'd be 100% understanding of the rule.
Banned/Removed-by-Admin content, though? That's absurd. That's way too broad.
"Violent content" could also be videos of officer misconduct, and there's the whole can of worms of what "promoting violence" means.
Fair
I think it's advocating violence, and extreme violence (things like terrorist beheading videos and such) that are banned. Things like Waffle House fights likely won't meet the banning threshold.
I assume this is prompted by Luigi-inspired death threats to CEOs. If they are monitoring this so closely, I bet the FBI might want to take a look at those records too...
I think it very well could, and it could serve any other Reddit corporate agenda too, such as removing content at the request of pieces of shit who punch and berate service staff and then later sue to scrub said content from the internet. It's a necessary distinction.