196
Community Rules
You must post before you leave
Be nice. Assume others have good intent (within reason).
Block or ignore posts, comments, and users that irritate you in some way rather than engaging. Report if they are actually breaking community rules.
Use content warnings and/or mark as NSFW when appropriate. Most posts with content warnings likely need to be marked NSFW.
Most 196 posts are memes, shitposts, cute images, or even just recent things that happened, etc. There is no real theme, but try to avoid posts that are very inflammatory, offensive, very low quality, or very "off topic".
Bigotry is not allowed, this includes (but is not limited to): Homophobia, Transphobia, Racism, Sexism, Ableism, Classism, or discrimination based on things like Ethnicity, Nationality, Language, or Religion.
Avoid shilling for corporations, posting advertisements, or promoting exploitation of workers.
Proselytization, support, or defense of authoritarianism is not welcome. This includes but is not limited to: imperialism, nationalism, genocide denial, ethnic or racial supremacy, fascism, Nazism, Marxism-Leninism, Maoism, etc.
Avoid AI generated content.
Avoid misinformation.
Avoid incomprehensible posts.
No threats or personal attacks.
No spam.
I've been thinking recently about chain-of-trust algorithms and decentralized moderation, and am considering making a bot that functions a bit like Fediseer but designed more for individual users, where people can be vouched for by other users. Ideally you end up with a network where trust is generated pseudo-automatically based on interactions between users, and reports could be used to gauge whether a post should be removed based on the trust level of the people making the reports vs the person getting reported. It wouldn't necessarily be a perfect system, but I feel like there would be a lot of upsides to it, and it could hopefully lead to mods/admins only needing to remove the most egregious stuff while anything more borderline gets handled via community consensus. (The main issue is lurkers would get ignored with this, but idk if there's a great way to avoid that happening tbh)
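Roughly, the report-weighing part I'm imagining could be sketched like this (the function name and threshold are made-up placeholders, not from any existing system):

```python
def decide_removal(reporter_trusts, reported_trust, threshold=1.5):
    """Remove a post when the combined trust of the reporters
    sufficiently outweighs the trust of the person being reported.

    reporter_trusts: trust scores of everyone who reported the post
    reported_trust: trust score of the post's author
    threshold: how much the reporters must outweigh the author by
    """
    return sum(reporter_trusts) > reported_trust * threshold
```

So a few moderately trusted users reporting could outweigh one author, but a single low-trust account couldn't remove an established user's post on its own.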
My main issue atm is how to do vouching without it being too annoying for people to keep up with. Not every instance enables downvotes, plus upvote/downvote totals in general aren't necessarily reflective of someone's trustworthiness. I'm thinking maybe it can be based on interactions, where replies to posts/comments get scored by a sentiment analysis model and that positive/negative number gets used? I still don't think that's a perfect solution or anything, but it would probably be a decent starting point.
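As a sketch, the sentiment-to-trust mapping might be as simple as this (the function name and `scale` factor are placeholder assumptions, and the sentiment score would come from whatever model ends up being used):

```python
def trust_delta_from_reply(sentiment, author_trust, scale=0.1):
    """Turn a reply's sentiment score in [-1, 1] into a trust change
    for the user being replied to, weighted by how trusted the
    replying author already is (so replies from established users
    count for more than replies from brand-new accounts)."""
    return sentiment * author_trust * scale
```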
If trust decays over time as well, then it rewards more active members somewhat, and means it's a lot harder to build up a bot swarm. If you wanted any significant number of accounts, you'd have to have them all posting at around the same time, which would be a much more obvious activity spike.
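The decay could just be a half-life, something like this (the 30-day half-life is an arbitrary placeholder I'd want to tune):

```python
def decayed_trust(trust, days_inactive, half_life_days=30.0):
    """Halve a user's trust every `half_life_days` of inactivity,
    so dormant accounts (and pre-built bot swarms sitting idle)
    steadily lose standing over time."""
    return trust * 0.5 ** (days_inactive / half_life_days)
```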
Idk, this was a wall of text lol, but it's something I've been considering for a while and whenever this sort of drama pops up it makes me want to work on implementing something.
I'm always wary of how such systems can be gamed and how they'll influence user behavior, but the only downside to trying is your own efforts. Even if you fail miserably, I imagine the exercise itself would improve our understanding of what works, what doesn't, and how to form better approaches in the future. To succeed in making a system which improves user interactions would be a truly wonderful thing, and may even translate to IRL applications. I would urge you to follow through with this for as long as you feel it's something you'd like to do.
Yeah, those are basically my thoughts too lol. Even if it ends up not working out, the process of trying will still be good since it'll give me more experience. Those aspects you're wary of are definitely my 2 biggest concerns too. I think (or at least hope) that with the rules I'm thinking of for how trust is generated it would mostly positively affect behaviour? I'm imagining that by "rewarding" trust for receiving positive replies, combined with a small reward for making positive replies in the first place, it would mostly just lead to more positive interactions overall. And I don't think I'd ever want a system like this to punish making a negative reply, only maybe getting negative replies in response, since hopefully that stops people from shying away from confronting harmful content just to avoid being punished. Honestly it might even be better to only ever reward trust and never retract it except via decay over time, but that's something worth testing I imagine.
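A reward-only update rule like that could be sketched like this (all the reward values are arbitrary guesses, and the function/parameter names are just illustrative):

```python
def apply_reply_reward(trusts, receiver, sender, sentiment,
                       receive_reward=0.05, send_reward=0.01):
    """Reward-only update for one reply: a positive reply grants
    trust to the user receiving it (larger share) and the user who
    wrote it (smaller share). Negative replies change nothing, so
    confronting harmful content is never punished; trust is only
    ever retracted separately, via decay over time."""
    if sentiment > 0:
        trusts[receiver] = trusts.get(receiver, 0.0) + receive_reward
        trusts[sender] = trusts.get(sender, 0.0) + send_reward
    return trusts
```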
And in terms of gaming the system, I do think that's kinda my bigger concern tbh. I feel like the most likely negative outcome is something like bots/bad actors finding a way to scam it, or the community turning into an echo chamber where ideas (that aren't harmful) get pushed out, or drifting towards the center and becoming less safe for marginalized people. I do feel like that's part of the reason 196 would be a pretty good community to use a system like this though, since there's already a very strong foundation of super cool people that could be made the initial trusted group, which would hopefully lead to a better result.
There are examples of similar sorts of systems that exist, but it's mostly various cryptocurrencies or other P2P systems that use trust just to verify that peers aren't malicious, and it's never really been tested for moderation afaik (I could have missed an example of it online, but I'm fairly confident in saying this). I think stuff like the Fediverse and other decentralized or even straight-up P2P networks are a good place for this sort of thing to work though, as a lot of the culture is already conducive to decentralizing previously centralized systems, and the communities tend to be smaller, which helps it feel more personal and deters bad actors/botting attempts, since there aren't many incentives and they become easier to recognize.
Hey wow, that's an awesome idea! I'm currently in training to become a software developer myself and this sounds really impressive!
Did you already start?
I've been looking at the Lemmy API and stuff, and into some existing libraries/implementations of trust networks, but that's about it so far tbh. I think I'm gonna start working on some implementation later today maybe; this whole mod drama and the discussion it led to made me really want to start lol.
Nice! If you post progress or so to any programming community @ me :D