Definitely something I've observed even here. Luckily we get few applications and there is a report button, but I share the author's frustration, and their jaded view that services such as ours are only tenable for a limited time. Eventually it will be trivially easy to flood this place with slop.
Question: what would happen if the server implemented something like Anubis on the application and/or post-creation pages? Wouldn't that block most bots from completing those forms? Or is that just not feasible at our scale?
I think Anubis is really aimed at scraper bots feeding AI models, rather than posting bots. It sits in front of your app and requires each browser to solve a JavaScript proof-of-work challenge before pages are served; its policy file lets you exempt well-known paths such as /robots.txt or /.well-known from the challenge.
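To make the cost asymmetry concrete, here is a minimal Python sketch of the generic proof-of-work idea (a toy illustration of the technique, not Anubis's actual code or configuration): the client must find a nonce whose hash clears a difficulty target, which costs one burst of hashing per human visit but adds up fast across a crawler's thousands of requests, while verification stays a single hash on the server.

```python
import hashlib
import os


def make_challenge() -> str:
    """Server side: issue a random challenge string."""
    return os.urandom(16).hex()


def solve(challenge: str, difficulty_bits: int) -> int:
    """Client side: brute-force a nonce whose hash has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1


def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    """Server side: one hash to check work that cost the client thousands."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))


challenge = make_challenge()
nonce = solve(challenge, difficulty_bits=16)  # ~65k hashes on average: trivial once, costly at crawler scale
assert verify(challenge, nonce, 16)
```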
If you're using, e.g., a Python bot that drives headless Chromium to execute JS and post stuff, you're probably going to code in known-good endpoints for comments and posts, rather than hit random ones like a scraper bot would. And because it runs a real browser engine, it solves the proof-of-work challenge just like a human visitor's browser does.
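A minimal sketch of what such a bot looks like, using Playwright to drive the headless Chromium described above (the URL and selectors are hypothetical; the point is that everything is hard-coded and the real browser engine runs any JS challenge as a side effect):

```python
from playwright.sync_api import sync_playwright

POST_URL = "https://example.instance/post/12345"  # hypothetical, hard-coded target

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    # A real browser engine: any JS challenge, proof-of-work included, just runs.
    page.goto(POST_URL)
    # Known-good form fields coded in ahead of time; no crawling of random paths.
    page.fill("textarea#comment-content", "generated text goes here")  # hypothetical selector
    page.click("button[type=submit]")
    browser.close()
```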
Anubis is good at stopping the n-requests-per-second hammering of scrapers, but not so good at blocking non-human bots that post at normal human rates.
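A quick sketch of why rate thresholds alone don't catch these (the limits below are invented for illustration): any window loose enough for humans is loose enough for a patient bot.

```python
from collections import deque
import time


class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per client."""

    def __init__(self, limit: int, window: float):
        self.limit, self.window = limit, window
        self.hits: deque[float] = deque()

    def allow(self, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        while self.hits and now - self.hits[0] > self.window:
            self.hits.popleft()  # drop requests that fell out of the window
        if len(self.hits) < self.limit:
            self.hits.append(now)
            return True
        return False


limiter = SlidingWindowLimiter(limit=5, window=60.0)  # humane: 5 posts per minute
# A scraper at 50 requests/second trips this immediately;
# a bot posting once every two minutes never will.
```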
My last employer was a Fortune 50 company, and we did automation detection through behavioral mapping: posting locations, times, and even word patterns. The word-pattern work was a very cool experimental project I got to work on; it used a database of normalized English word frequencies to flag accounts whose language was too similar across users, or even too "perfect", though that signal was only ever treated as an indicator, never as definitive. Detecting human-impersonating bots from raw network traffic alone is extremely difficult.
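The too-similar-language idea can be sketched in a few lines (a toy reconstruction of the concept, not the production system, and the 0.8 threshold is purely illustrative): build a normalized word-frequency vector per user and flag pairs whose vocabularies are suspiciously close.

```python
from collections import Counter
from math import sqrt


def freq_vector(text: str) -> dict[str, float]:
    """Normalized word-frequency vector for one user's combined posts."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}


def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


users = {  # toy data: two bots sharing a template, one human
    "alice": "i really like this distro and the community here",
    "bot_1": "great insightful post thanks for sharing this perspective",
    "bot_2": "great insightful post thanks for sharing that perspective",
}

vectors = {user: freq_vector(text) for user, text in users.items()}
for u1 in users:
    for u2 in users:
        if u1 < u2 and cosine(vectors[u1], vectors[u2]) > 0.8:
            print(f"flag: {u1} and {u2} write suspiciously alike")  # flags the two bots
```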