Make it blockable and muteable... I don't like opening a post and the first thing I see is a pinned bot comment.
Just block it and you'll no longer see any of its posts or comments.
Have you checked out awesome lemmy? You might want to improve an existing project.
None of the mod bots seem to be written in Rust, which I'll be using. So nope.
Half the features are helpful and the others are obnoxious or useless Reddit vestiges. Auto-banning users, locking communities, and deleting posts are all rather harmful and not conducive to interesting discussion and posts. Welcome messages and automod comments on every post are also plain terrible.
Make a slim bot with moderation tools that helps mods and admins do their tasks more efficiently and comfortably, but don't offload the mod role itself to the bot. That is one of the worst parts of Reddit.
Your idea of a "bot" is then just manual labor for mods with "advanced" features.
Honestly a bot moderator is just open source enshittification of the fediverse if you did it like this. Bots have no nuance, do not understand context and are generally unable to apply reason to a situation.
The most egregious suggestion is username-based bans; this is 100% going to remove a bunch of users without real cause. And having automod comment the same irrelevant headline on every single post just creates spam and makes the comment count meaningless.
In my opinion the bots should handle all the tedious work for the moderators, and there may even be scenarios where a bot content filter could be invaluable, but in general any tool you put out there will also be used to its fullest extent by at least one person.
Like cops with too many powers: eventually they abuse them for everything.
What exactly am I to do when mods use my bot maliciously? I just try to program features into the bot that moderators might use. Everything is optional; if the features are used maliciously, it's not my fault but the moderators'. What fault does a knife maker have when their knives are used to murder people instead of cutting vegetables?
I do appreciate your comment though; some misc tasks don't necessarily need to exist, such as welcome messages and auto comments on posts. Actually, I'll remove welcome messages, they are a waste of API calls. Maybe auto comments on posts as well, but scheduled posts are (most likely) staying. It is a moderation bot after all. I'll consider your complaint. Thanks.
You mean as if the bot was a helper and not the admin itself?
Madness!
The bot won't have any admin permissions.
Generally speaking, please don't. I've never seen a Reddit bot that I didn't find annoying.
To each their own, but I found plenty of useful or entertaining bots on Reddit. If you hate bots that much, there is a toggle in your Lemmy settings to block all labelled bot accounts.
I have them blocked already, but that doesn't stop the moderation bots screwing up and deleting good posts, whether mine or other people's. It's unfortunate not to get informed when the words in someone's post happen to be in alphabetical order, though.
This is a moderation bot.
And?
The bot isn't for your convenience but for the moderators', obviously.
Please, no welcome messages. They're like the most obnoxious thing Reddit ever had (well, OK, maybe not the most); they just clog your inbox.
I agree that welcome messages are often just clutter, but I don't think that this means the feature should not be included. For some communities, a welcome message is appropriate. Moderators don't need to use every feature for a given community.
I'll consider it. Thanks for your comment.
I don't see a problem with having the feature as an option. It only becomes a problem if it is misused by moderators.
The problem with something like this is that people only come to dislike it with experience, and everyone has to start out inexperienced. So it's a certainty that there will be a lot of moderators who misuse it before they learn better.
I also don't mean to sound like a GNOME dev, but what is actually the use case for this?
I beg of you, please don’t. The worst thing to happen to Reddit was their Automod. Please reconsider.
Trying to automate things and decrease mod burden is great, so I don't oppose OP's idea on general grounds. My issues are with two specific points:
- Punish content authors or take action on content via word blacklist/regex
- Ban members of communities by their usernames/bios via word blacklist or regex
- Automated systems don't understand what people say within a context. As such, it's unjust and abusive to use them to punish people based on what they say.
- This sort of automated system is extra easy to circumvent for malicious actors, especially since it needs to be tuned in a way that lowers the number of false positives (unjust bans), which leads to a higher number of false negatives (crap slipping past the radar).
- Something that I've seen over and over on Reddit, and that mods here will likely do in a similar way, is shifting the blame to automod. "NOOOO, I'm not unjust. I didn't ban you incorrectly! It was automod lol lmao"
Instead of those two I think that a better use of regex would be an automated reporting system, bringing potentially problematic users/pieces of content to the attention of human mods.
Alright. Sounds fair. Instead of taking dangerous actions, I'll make it create a report instead. Though I'll probably keep the feature to punish members by their usernames via regex or word blacklist.
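Roughly what I have in mind for that, as a rough sketch using the regex crate (the patterns here are placeholders, and the actual report submission through the Lemmy API isn't shown, just a print):

```rust
use regex::RegexSet;

/// Sketch: a blacklist/regex filter that only produces a report reason,
/// never a removal or ban. The real report call isn't shown.
struct ContentFilter {
    patterns: RegexSet,
}

impl ContentFilter {
    fn new(patterns: &[&str]) -> Result<Self, regex::Error> {
        Ok(Self { patterns: RegexSet::new(patterns)? })
    }

    /// Returns a report reason if the text trips any pattern, None otherwise.
    fn check(&self, text: &str) -> Option<String> {
        let hits: Vec<usize> = self.patterns.matches(text).into_iter().collect();
        if hits.is_empty() {
            None
        } else {
            Some(format!("automod: matched blacklist pattern(s) {:?}", hits))
        }
    }
}

fn main() -> Result<(), regex::Error> {
    // Placeholder patterns; real ones would come from per-community config.
    let filter = ContentFilter::new(&[r"(?i)buy cheap \w+ now", r"(?i)crypto giveaway"])?;

    for comment in ["totally normal comment", "CRYPTO GIVEAWAY, click here!"] {
        match filter.check(comment) {
            // In the real bot this would file a report through the Lemmy API
            // instead of printing; no removal, no ban.
            Some(reason) => println!("would report {comment:?}: {reason}"),
            None => println!("ok: {comment:?}"),
        }
    }
    Ok(())
}
```

So for post/comment content, a blacklist hit would only file a report for the human mods to look at, nothing more.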
> Though I'll probably keep the feature to punish members by their usernames via regex or word blacklist.
This right here is the attitude that I have a problem with. I can think of one user who would get blacklisted right away because of their username alone. And that does not sit right with me.
> Alright. Sounds fair. Instead of taking dangerous actions, I'll make it create a report instead.
Thank you! Frankly, if done this way I'd be excited to use it ASAP.
Why? Automod is just a tool; the issue people have with it is how overzealous the mods using it are. If you're moderating a community with 10,000+ people you can't expect to filter and manage everything yourself, so a bot scheduling posts and filtering potential spam/low-effort content is necessary.
Automod is just a tool, indeed, but how a tool is designed dictates or at least encourages its usage.
Exactly.
It's to ease the work of community moderators. You can't just catch every comment that needs to be removed, or every post, etc. This is where an automated moderation bot comes in. No matter how much you hate it, having some automated system is a must on growing platforms such as Lemmy.
It's also not like the bot instantly bans everyone. I honestly don't get the hate.
Banning members based on their username. Locking down an entire community because of a small group of people spamming. Deleting posts because an account isn't old enough?
Why not throw in a system where posts have to be approved before they get published? Really make the community welcoming.
It was said in another comment above that this tool is easily abused by “overzealous mods”, but I believe the real problem is overzealous programmers.
Reddit failed for reasons, and I believe automod was one of them. But you’ll do you, and nothing I say can change that.
> Banning members based on their username.
I am merely trying to give community mods options; this feature, like the others, is optional. Direct your complaints to the community owners if they use a regex that matches usernames you think shouldn't be banned.
> Locking down an entire community because of a small group of people spamming.
The bot just locks the community down to stop the spam; otherwise everyone's feed will just be filled with it. I haven't seen such spam yet, but that does not mean there won't be any in the future. I'm just trying to be prepared for it.
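For context, the lock trigger I have in mind is basically a sliding-window post counter per community, something like this rough sketch (the numbers are placeholders and the actual lock/API call isn't shown):

```rust
use std::collections::{HashMap, VecDeque};
use std::time::{Duration, Instant};

/// Sketch: flag a community for locking once it gets too many posts
/// within a sliding time window.
struct SpamGuard {
    window: Duration,
    threshold: usize,
    recent: HashMap<String, VecDeque<Instant>>,
}

impl SpamGuard {
    fn new(window: Duration, threshold: usize) -> Self {
        Self { window, threshold, recent: HashMap::new() }
    }

    /// Record a new post; returns true if the community should be locked.
    fn record_post(&mut self, community: &str, now: Instant) -> bool {
        let posts = self.recent.entry(community.to_string()).or_default();
        posts.push_back(now);
        // Drop timestamps that fell out of the window.
        while posts.front().map_or(false, |t| now.duration_since(*t) > self.window) {
            posts.pop_front();
        }
        posts.len() >= self.threshold
    }
}

fn main() {
    // Example: 5 or more posts within 60 seconds triggers a lock.
    let mut guard = SpamGuard::new(Duration::from_secs(60), 5);
    let now = Instant::now();
    for i in 0..6 {
        if guard.record_post("example_community", now + Duration::from_secs(i)) {
            println!("would lock example_community to stop the spam wave");
        }
    }
}
```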
> Deleting posts because an account isn't old enough?
Again, I am just giving the mods options. If they enable the feature and use it, direct your complaints to them.
> Why not throw in a system where posts have to be approved before they get published? Really make the community welcoming.
That is possible with post locking and with a dashboard. I'll look into it.
> It was said in another comment above that this tool is easily abused by “overzealous mods”, but I believe the real problem is overzealous programmers.
Again, I'm only giving them options.
Every tool can be used for both good and bad purposes. Why is that the fault of the tool or its creator?
OP I agree with you, it's a great idea imo.
I've been a moderator before on a Discord server with 1000+ members, for one of my FOSS projects, and maintenance against scam/spam bots grew so bad that I had to get a team of moderators, add an auto-moderation bot, and write an additional moderation bot myself!
Here is the source to that bot; it might be usable for inspiration, or just plain usable by some other users:
https://github.com/Rikj000/Discord-Auto-Ban
I think it will only be a matter of time before the spam/scam bots catch up to Lemmy, so it's good to be ahead of the curve with auto-moderation.
However, I also partially agree with @dohpaz42: auto-moderation on Reddit is very, uhm, present.
Imo auto moderation should not really be visible to non-offenders.
I don't moderate any Lemmy communities, but generally I like having a strike system so that not everything gets you banned. For example, using a disallowed word (swear word, NSFW term, etc.) deletes the post and adds a strike to the user, sends the user an automated message with their number of strikes, and after repeated offences leads to a ban.
Sounds good. Added. Hitting a certain strike threshold within a specified time period will temporarily ban the user.
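Roughly the strike bookkeeping I'm planning, as an in-memory sketch (the thresholds are placeholders, and persistence plus the actual remove/message/ban calls aren't shown):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Sketch: strikes expire after a configurable period, and crossing the
/// threshold inside that period yields a temporary ban.
struct StrikeTracker {
    strike_ttl: Duration, // how long a strike counts against a user
    threshold: usize,     // strikes within the TTL that trigger a temp ban
    strikes: HashMap<String, Vec<Instant>>,
}

enum Action {
    Warn { strikes: usize },
    TempBan,
}

impl StrikeTracker {
    fn new(strike_ttl: Duration, threshold: usize) -> Self {
        Self { strike_ttl, threshold, strikes: HashMap::new() }
    }

    fn add_strike(&mut self, user: &str, now: Instant) -> Action {
        let entry = self.strikes.entry(user.to_string()).or_default();
        entry.push(now);
        // Only keep strikes that are still inside the TTL window.
        entry.retain(|t| now.duration_since(*t) <= self.strike_ttl);
        if entry.len() >= self.threshold {
            entry.clear(); // reset once the temp ban is issued
            Action::TempBan
        } else {
            Action::Warn { strikes: entry.len() }
        }
    }
}

fn main() {
    // Example: 3 strikes within a week leads to a temp ban.
    let mut tracker = StrikeTracker::new(Duration::from_secs(7 * 24 * 3600), 3);
    let now = Instant::now();
    for _ in 0..3 {
        match tracker.add_strike("some_user", now) {
            Action::Warn { strikes } => println!("message user: strike {strikes} of 3"),
            Action::TempBan => println!("would issue a temporary ban"),
        }
    }
}
```

Expired strikes simply stop counting, so an old slip-up wouldn't push someone into a temp ban.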
I would add a way to send an automated alert to mods if a user gets repeated temporary bans (kind of like a super-strike), so human mods can decide if a permanent ban is warranted or if they need to review how zealous the automod is being.
I'll think about it. But I'll most likely add an option to permanently ban a user after X temp bans instead. The thing with sending alerts to mods is that the number of API calls increases with the moderator count. I'd like to keep the number of API calls down, since there is certainly a rate limit on the instance I'll be using for the bot.
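Something along these lines is what I mean: just counting temp bans locally and escalating once, instead of messaging every mod (the names and the limit are placeholders):

```rust
use std::collections::HashMap;

/// Sketch: count temp bans per user locally and escalate to a permanent
/// ban after a limit, instead of sending an alert to each moderator
/// (which would cost one API call per mod).
#[derive(Default)]
struct Escalation {
    temp_bans: HashMap<String, u32>,
}

impl Escalation {
    /// Record a temp ban; returns true once the user has hit the limit
    /// and should be banned permanently.
    fn record_temp_ban(&mut self, user: &str, limit: u32) -> bool {
        let count = self.temp_bans.entry(user.to_string()).or_insert(0);
        *count += 1;
        *count >= limit
    }
}

fn main() {
    let mut esc = Escalation::default();
    for i in 1..=3 {
        if esc.record_temp_ban("repeat_offender", 3) {
            println!("temp ban #{i}: would permanently ban repeat_offender");
        } else {
            println!("temp ban #{i}: recorded");
        }
    }
}
```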