This reminds me of that one time a guy figured out how to make "gzip bombs" that bricked automated vuln scanners.
I had a scanner relentlessly smashing a server at work and configured one of those.
evidently it was one of our customers. it filled up their storage and increased their storage costs by like 500%.
they complained that we purposefully sabotaged their scans. I told them I'd spent two weeks tracking down and confirming that their scans were causing performance issues on our infrastructure, and that I had every right to protect the experience of all our users.
I also reminded them they were effectively DDoSing our services, which I could ask the FBI's cyber crimes division to investigate.
they shut up, paid their bill, and didn't renew their measly $2k MRR account with us when their contract ended.
bitch ass small companies are always the biggest pita.
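For anyone wondering what "configured one of those" can look like in practice, here's a minimal sketch, not the commenter's actual setup: it builds a small gzip payload that inflates to roughly a gigabyte and serves it only to a made-up "BadScanner" User-Agent, relying on `Content-Encoding: gzip` so the client does the inflating.

```python
# Minimal gzip-bomb sketch (assumptions: Flask, a hypothetical
# "BadScanner" User-Agent pattern; illustrative only).
import gzip
import io

from flask import Flask, Response, request

def make_gzip_bomb(uncompressed_mib=1024):
    """Compress a long run of zero bytes: ~1 GiB of zeros gzips down to
    around a megabyte, but the client has to inflate all of it."""
    buf = io.BytesIO()
    chunk = b"\0" * (1024 * 1024)  # 1 MiB of zeros per write
    with gzip.GzipFile(fileobj=buf, mode="wb", compresslevel=9) as gz:
        for _ in range(uncompressed_mib):
            gz.write(chunk)
    return buf.getvalue()

app = Flask(__name__)
BOMB = make_gzip_bomb()

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def maybe_trap(path):
    ua = request.headers.get("User-Agent", "")
    if "BadScanner" in ua:  # placeholder pattern for the offending scanner
        # Content-Encoding tells the client the body is gzip; a naive
        # scanner decompresses the whole thing into memory or onto disk.
        return Response(BOMB, headers={"Content-Encoding": "gzip",
                                       "Content-Type": "text/html"})
    return "nothing to see here"
```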
This showed up on HN recently. Several people who wrote web crawlers pointed out that this won’t even come close to working except on terribly written crawlers. Most just limit the number of pages crawled per domain based on popularity of the domain. So they’ll index all of Wikipedia but they definitely won’t crawl all 1 million pages of your unranked website expecting to find quality content.
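For what it's worth, the per-domain cap those crawler authors describe is trivial to implement. A rough sketch (names and limits are made up, not from any particular crawler):

```python
from collections import defaultdict
from urllib.parse import urlparse

class CrawlBudget:
    """Cap pages fetched per domain so an endless link maze on an
    obscure site burns at most `default_limit` requests."""

    def __init__(self, default_limit=100, overrides=None):
        self.default_limit = default_limit
        # Popular domains can get a much larger budget,
        # e.g. {"en.wikipedia.org": 10_000_000}.
        self.overrides = overrides or {}
        self.fetched = defaultdict(int)

    def allow(self, url):
        domain = urlparse(url).netloc
        if self.fetched[domain] >= self.overrides.get(domain, self.default_limit):
            return False
        self.fetched[domain] += 1
        return True
```

A crawler with a budget like this will happily index all of Wikipedia but gives up on the maze after a hundred pages.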
Can confirm, I have a website (https://2009scape.org/) with tonnes of legacy forum posts (100k+). No crawlers ever go there.
It's a shame that 404media didn't do any due diligence when writing this
Why would they? Outrage and meme content sell clicks, in-depth journalism doesn't.
No crawlers ever go there.
if it makes you feel any better, i would go there if i was a web crawler.
2009scape!? If it's what I think it is that is amazing. Legend
It is what you think it is, come join ^^. It's a small niche world
More accurately, it traps any web crawler, including regular search engines and benign projects like the Internet Archive. This should not be used without an allowlist for known trusted crawlers at least.
More accurately, it traps any web crawler
More accurately, it does not trap any competent crawlers, which have per domain limits on how many pages they crawl.
Just put the trap in a space roped off by robots.txt - any crawler that ventures there deserves to get roasted.
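A minimal robots.txt for that, assuming the maze lives under a hypothetical /maze/ path, would be something like:

```
User-agent: *
Disallow: /maze/
```

Compliant crawlers (search engines, the Internet Archive) skip it; anything that ignores the rule walks straight into the trap.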
Yup, put all the bad stuff into "not-robots.txt". Works every time.
How exactly would that work? Would trusted crawlers be blocked from accessing the maze?
You can tell which crawler it is by the User-Agent header
Yeah, and then you allowlist them by blocking them from the maze.
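Concretely, that check can be as simple as this sketch (the agent strings and paths are placeholders; note that User-Agent is trivially spoofed, so serious setups also verify crawlers by reverse DNS):

```python
# Hypothetical routing check: trusted crawlers, matched by User-Agent
# substring, never see the maze; everything else under /maze/ does.
TRUSTED_AGENTS = ("Googlebot", "bingbot", "archive.org_bot")

def handle(path, user_agent):
    trusted = any(agent in user_agent for agent in TRUSTED_AGENTS)
    if path.startswith("/maze/"):
        return "404" if trusted else "maze_page_with_more_links"
    return "normal_page"
```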
I haven't seen that episode in probably 15 years and I still remember exactly what this was.
First thing that popped into my head after I read the headline!
Can you explain for the rest of the class?
Thank you!
I'm surprised no one has created a trek wiki separate from the shitty fandom site yet. Sometimes when I search for Doom info I accidentally click the fandom link and have to go back out to get the .org site.
The Minecraft wiki has been way better since they ditched Fandom.
I was aware of the two Doom wikis, but not the reason there was a split, and I've heard other complaints about fandom sites before. What's the deal with that? I'm out of the loop.
fandom.com has awful intrusive ads and a shitty slow website (probably largely because of the ads)
Ah, that explains why I never had an issue with it; I use uBlock Origin.
blocked on this page: 69
Phew, you're not kidding! And the number keeps climbing the longer I leave it there. 84 now.