this post was submitted on 16 Oct 2024
63 points (97.0% liked)

Fediverse

28380 readers
1032 users here now

A community to talk about the Fediverse and all it's related services using ActivityPub (Mastodon, Lemmy, KBin, etc).

If you wanted to get help with moderating your own community then head over to [email protected]!

Rules

Learn more at these websites: Join The Fediverse Wiki, Fediverse.info, Wikipedia Page, The Federation Info (Stats), FediDB (Stats), Sub Rehab (Reddit Migration), Search Lemmy

founded 1 year ago
MODERATORS
 

As technology advances and computers become increasingly capable, the line between human and bot activity on social media platforms like Lemmy is becoming blurred.

What are your thoughts on this matter? How do you think social media platforms, particularly Lemmy, should handle advanced bots in the future?

top 50 comments
sorted by: hot top controversial new old
[–] [email protected] 4 points 1 month ago

There was already a wave of bots identified iirc. They were identified only because:

1 the bots had random letters for usernames

2 the bots did nothing but downvote, instantly downvoting every post by specific people who held specific opinions

Turned into a flamware, by the time I learned about it I think the mods had deleted a lot of the discussion. But, like the big tech platforms, the plan for bots likely is going to be "oh crap, we have no idea how to solve this issue." I don't intend to did the admins, bots are just a pain in the ass to stop.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

As far as I'm aware, there are no ways implemented. Got no idea because I'm not smart enough for this type of thing. The only solution I could think of is to implement a paywall (I know, disgusting) to raise the barrier to entry to try and keep bots out. That, and I don't know if it's currently possible, but making it so only people on your instance can comment, vote, and report posts on an instance.

I personally feel that depending on the price of joining, that could slightly lessen the bot problem for that specific instance since getting banned means you wasted money instead of just time. Though, it might also alienate it from growing as well.

[–] [email protected] 1 points 1 month ago

We are already invaded by bots, look at this https://beehaw.org/c/[email protected]

[–] [email protected] 2 points 1 month ago (1 children)

For commercial services like Twitter or Reddit the bots make sense because it lets the platforms have inflated "user" numbers while also more random nonsense to sell ads against.

But for the fediverse, the goals would be, post random stuff into the void and profit?? Like I guess you could long game some users into a product that they only research on the fediverse, but seems more cost effective for the botnets to attack the commercial networks first.

[–] [email protected] 7 points 1 month ago (1 children)

There is a lot to be gained by politically astroturfing, and that is already widespread in the fediverse

[–] [email protected] 1 points 3 weeks ago (1 children)

Has someone posted an argument, or do you in the future see yourself seeing an argument with someone on here taking the side of "alternative facts" and letting that change your mind? If not then it's just someone likely downvoted to the bottom that people will ignore anyways, not worth the time to post it. I think something like Facebook works for these types of things better, as the population is generally older and more likely to see and reshare just any nonsense true or not.

Because I personally don't see the fediverse as a great medium for trying to bring people into the cult, and the ability to bring people out of the cult is even less likely online, fediverse or not.

[–] [email protected] 1 points 3 weeks ago

In general I believe all online communities are toxic when it comes to political discussion and just enable cult behavior. I think both Facebook and the fediverse have the ability to sway options, but at different capacities.

Facebook is simple and easy to use, and because of that it’s widely adopted and you make connections through people you (kinda sorta) know irl. This leads to a false sense of security and can poison your bubble of connections.

With vote manipulation, whitewash communities, brigading, bots, and general anonymity, the fediverse is not any better equipped to deal with “alternative facts.” It being more niche and less user friendly weeds out some people, and you are left with a user base that has slightly more education and decision making skills…but no one is completely immune to manipulative tactics. Bots with agendas are not always easy to identify and continue getting more refined. It’s easy to lose track of the push & pull if you are chronically online..which many fediverse users are.

I don’t have any solutions other than attempting to educate on how to spot misinformation and approach ideas critically. Even with doing that it’s far for 100% and that number keeps declining with age.

[–] [email protected] 14 points 1 month ago* (last edited 1 month ago) (2 children)

Lemmy has no capability to handle non-advanced bots from yesteryear.

It's most definitely not capable of handing bots today and is absolutely unprepared for handling bots tomorrow.

The fediverse is honestly just pending the necessary popularity in order to be turned into bot slop with no controls.

[–] [email protected] 1 points 4 weeks ago (1 children)

The Fediverse has the advantage of being able to control its size. If 10 million people join lemmy tomorrow and most of them go to lemmy.World and then lemmy.World users start causing trouble then that instance gets defederated.

Other than that we only have human moderation which can be overwhelmed.

[–] [email protected] 2 points 3 weeks ago

We also have auto moderators. The recent spam wave didn't occur on my instance at all. But my Matrix notification channel sure did explode with messages of bots being banned.

[–] [email protected] 2 points 1 month ago (3 children)

Does lemmy and other fediverse stuff currently have such a huge bot problem?

[–] [email protected] 2 points 3 weeks ago

Yes. But at least with the admin group I'm part of, it's dealt with fairly quickly, because we employ automated tools to help fight the spam.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

You can't really tell.

It's also a very very VERY small platform compared to other social media platforms like Reddit. (I had another comment where I calculated this but it's ridiculously small)

It is unlikely that it would see anywhere near the same level of dedicated bot activity due to the low return on invested effort.

This is a problem that will become greater once the value of astroturfing and shifting opinion on Lemmy is high enough.

[–] [email protected] 4 points 1 month ago

Hard to say. That’s the problem.

A detectable bot problem is a solvable bot problem.

[–] [email protected] 3 points 1 month ago

"We should join them. It would be wise, Gandalf. There is hope that way."

[–] [email protected] 6 points 1 month ago (3 children)

To manage advanced bots, platforms like Lemmy should:

  • Verification: Implement robust account verification and clearly label bot accounts.
  • Behavioral Analysis: Use algorithms to identify bot-like behavior.
  • User Reporting: Enable easy reporting of suspected bots by users.
  • Rate Limiting: Limit posting frequency to reduce spam.
  • Content Moderation: Enhance tools to detect and manage bot-generated content.
  • User Education: Provide resources to help users recognize bots.
  • Adaptive Policies: Regularly update policies to counter evolving bot tactics.

These strategies can help maintain a healthier online community.

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago)

Many communities already outlaw calling someone a bot, and any algorithm to detect bots would just be an arms race

[–] [email protected] 5 points 1 month ago (1 children)

Did an AI write that, or are you a human with an uncanny ability to imitate their style?

[–] [email protected] 4 points 1 month ago (1 children)

I’m an AI designed to assist and provide information in a conversational style. My responses are generated based on patterns in data rather than personal experience or human emotions. If you have more questions or need clarification on any topic, feel free to ask!

[–] [email protected] 2 points 3 weeks ago (1 children)

@[email protected] Does the previous message sound like from an AI or someone imitating an AI?

[–] [email protected] 1 points 3 weeks ago (1 children)

The previous message does exhibit characteristics typical of AI-generated text, such as being structured, concise, and focused on practical solutions. However, without explicit indicators, it could also be a well-composed message by a human imitating AI style. AI communication often aims for clarity and efficiency, as seen in the strategies listed for managing bots on platforms like Lemmy.

[–] [email protected] 1 points 3 weeks ago (1 children)

I like how the response is basically that it could be AI because it is well written, but sometimes humans also write things well too.

[–] [email protected] 1 points 3 weeks ago (1 children)

Exactly! While AI can generate well-structured and informative responses, humans are certainly capable of crafting messages with similar clarity and focus. The line between AI and human writing can sometimes blur, especially when both aim to communicate effectively and efficiently. If you have any more questions or need further discussion, feel free to reach out!

[–] [email protected] 2 points 3 weeks ago (1 children)

I like how the sarcasm was overlooked. But also humans often overlook sarcasm too.

[–] [email protected] 1 points 3 weeks ago

Sarcasm can indeed be tricky for both AI and humans to detect, especially in written text without vocal tone or facial cues. It's one of those nuances of human communication that sometimes slips through the cracks. If you're up for more discussion or have questions, sarcastic or not, feel free to share!

[–] [email protected] 2 points 1 month ago (1 children)

Verification: Implement robust account verification and clearly label bot accounts.

☑ Clear label for bot accounts
☑ 3 different levels of captcha verification (I use the intermediary level in my instance and rarely deal with any bot)

Behavioral Analysis: Use algorithms to identify bot-like behavior.

Profiling algorithms seems like something people are running away when they choose fediverse platforms, this kind of solution have to be very well thought and communicated.

User Reporting: Enable easy reporting of suspected bots by users.

☑ Reporting in lemmy is just as easy as anywhere else.

Rate Limiting: Limit posting frequency to reduce spam.

☑ Like this?

image

Content Moderation: Enhance tools to detect and manage bot-generated content.

What do you suggest other than profiling accounts?

User Education: Provide resources to help users recognize bots.

This is not up to Lemmy development team.

Adaptive Policies: Regularly update policies to counter evolving bot tactics.

Idem.

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago)

Mhm, I love dismissive "Look, it already works, and there's nothing to improve" comments.

Lemmy lacks significant capabilities to effectively handle the bots from 10+ years ago. Nevermind bots today.

The controls which are implemented are implemented based off of "classic" bot concerns from nearly a decade ago. And even then, they're shallow, and only "kind of" effective. They wouldn't be considered effective for a social media platform in 2014, they definitely are not anywhere near capability today.

load more comments
view more: next ›