this post was submitted on 23 Jan 2025
895 points (97.9% liked)

Technology


A pseudonymous coder has created and released an open source “tar pit” that indefinitely traps AI training web crawlers in an infinite series of randomly generated pages, wasting their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants that trap and consume their prey, can be deployed by website owners to protect their own content from being scraped, or deployed “offensively” as a honeypot to waste AI companies’ resources.

“It's less like flypaper and more an infinite maze holding a minotaur, except the crawler is the minotaur that cannot get out. The typical web crawler doesn't appear to have a lot of logic. It downloads a URL, and if it sees links to other URLs, it downloads those too. Nepenthes generates random links that always point back to itself - the crawler downloads those new links. Nepenthes happily just returns more and more lists of links pointing back to itself,” Aaron B, the creator of Nepenthes, told 404 Media.
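The mechanism Aaron B describes is simple enough to sketch. Below is a minimal, hypothetical Python/Flask illustration of the idea, not Nepenthes' actual code; the route name, the per-page link count, and the deliberate delay are all invented for the example:

```python
import random
import string
import time

from flask import Flask

app = Flask(__name__)

def random_slug(length=12):
    # Arbitrary lowercase path segment; every value is a "new" page.
    return "".join(random.choices(string.ascii_lowercase, k=length))

@app.route("/maze/")
@app.route("/maze/<path:slug>")
def maze(slug=""):
    # Drip-feed the response so each request also costs the crawler time.
    time.sleep(2)
    # Every page consists only of links to more maze pages, so a naive
    # "download every URL you see" crawler never runs out of work.
    links = "\n".join(
        f'<a href="/maze/{random_slug()}">{random_slug()}</a><br>'
        for _ in range(20)
    )
    return f"<html><body>\n{links}\n</body></html>"

if __name__ == "__main__":
    app.run(port=8080)
```

A naive crawler that enters `/maze/` discovers twenty fresh links on every page, each of which leads only to another maze page.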

(page 2) 50 comments
[–] [email protected] 32 points 22 hours ago (1 children)

This reminds me of that one time a guy figured out how to make "gzip bombs" that bricked automated vuln scanners.
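For anyone unfamiliar with the trick: HTTP lets a server send compressed responses, and long runs of identical bytes compress at roughly 1000:1 under gzip, so a payload of a few megabytes can inflate to gigabytes in the client's memory. A hedged sketch of how such a payload could be built (the sizes are illustrative):

```python
import gzip
import io

# Build a gzip payload that is small on the wire but enormous when
# decompressed. Long runs of zeros compress at roughly 1000:1, so
# ~10 GiB of zeros shrinks to around 10 MiB of gzip data.
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
    chunk = b"\0" * (1024 * 1024)      # 1 MiB of zeros per write
    for _ in range(10 * 1024):         # 10 GiB uncompressed in total
        gz.write(chunk)
bomb = buf.getvalue()

# Served with "Content-Encoding: gzip", a client that naively
# decompresses the body tries to allocate the full 10 GiB.
print(f"compressed size: {len(bomb) / 1024 / 1024:.1f} MiB")
```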

[–] [email protected] 31 points 21 hours ago (7 children)

I had a scanner that was relentlessly smashing a server at work, so I configured one of those.

evidently it was one of our customers. it filled their storage up and increased their storage costs by like 500%.

they complained that we purposefully sabotaged their scans. when I told them I'd spent two weeks tracking it down and had confirmed their scans were causing performance issues on our infrastructure, I said I had every right to protect the experience of all our users.

I also reminded them that they were effectively DDoSing our services, and that I could file a request to investigate with the FBI's cyber crimes division.

they shut up, paid their bill, and didn't renew their measly $2k MRR account with us when their contract ended.

bitch-ass small companies are always the biggest PITA.

[–] [email protected] 141 points 22 hours ago (9 children)

This showed up on HN recently. Several people who wrote web crawlers pointed out that this won’t even come close to working except on terribly written crawlers. Most just limit the number of pages crawled per domain based on popularity of the domain. So they’ll index all of Wikipedia but they definitely won’t crawl all 1 million pages of your unranked website expecting to find quality content.
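To make the objection concrete, here's a hypothetical sketch of the kind of per-domain budgeting a production crawler applies; the numbers and the popularity table are invented:

```python
from urllib.parse import urlparse

# Hypothetical per-domain crawl budget. Well-known domains get a deep
# crawl; unknown domains get a shallow one, so an infinite maze on an
# unranked site costs the crawler at most its small budget.
PAGES_CRAWLED: dict[str, int] = {}
POPULAR_DOMAINS = {"en.wikipedia.org": 10_000_000}

def budget_for(domain: str) -> int:
    return POPULAR_DOMAINS.get(domain, 1_000)  # unknown sites: ~1k pages

def should_crawl(url: str) -> bool:
    domain = urlparse(url).netloc
    if PAGES_CRAWLED.get(domain, 0) >= budget_for(domain):
        return False  # budget spent; ignore further links on this domain
    PAGES_CRAWLED[domain] = PAGES_CRAWLED.get(domain, 0) + 1
    return True
```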

[–] [email protected] 22 points 19 hours ago (9 children)

Then that's where we hide the good stuff

[–] [email protected] 80 points 22 hours ago* (last edited 22 hours ago) (5 children)

Can confirm, I have a website (https://2009scape.org/) with tonnes of legacy forum posts (100k+). No crawlers ever go there.

It's a shame that 404 Media didn't do any due diligence when writing this

[–] [email protected] 7 points 18 hours ago

Why would they? Outrage and meme content sell clicks, in-depth journalism doesn't.

[–] [email protected] 40 points 21 hours ago

No crawlers ever go there.

if it makes you feel any better, I would go there if I were a web crawler.

[–] [email protected] 21 points 21 hours ago (1 children)

2009scape!? If it's what I think it is, that is amazing. Legend

[–] [email protected] 18 points 21 hours ago

It is what you think it is, come join ^^. It's a small niche world

[–] [email protected] 63 points 23 hours ago (3 children)

More accurately, it traps any web crawler, including regular search engines and benign projects like the Internet Archive. This should not be used without an allowlist for known trusted crawlers at least.

[–] [email protected] 21 points 18 hours ago* (last edited 18 hours ago) (1 children)

More accurately, it traps any web crawler

More accurately, it does not trap any competent crawler, which will have a per-domain limit on how many pages it crawls.

[–] [email protected] 31 points 19 hours ago (1 children)

Just put the trap in a space roped off by robots.txt: any crawler that ventures in there deserves to get roasted.
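Concretely, roping the trap off might look like the following hypothetical snippet (the `/maze/` path is illustrative): robots.txt disallows the maze prefix, so only crawlers that ignore it ever wander in.

```python
from flask import Flask

app = Flask(__name__)

# Illustrative only: the maze lives under /maze/, and robots.txt tells
# well-behaved crawlers to keep out. Any bot that enters anyway has
# already chosen to ignore the site's rules.
ROBOTS_TXT = """\
User-agent: *
Disallow: /maze/
"""

@app.route("/robots.txt")
def robots():
    return ROBOTS_TXT, 200, {"Content-Type": "text/plain"}
```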

[–] [email protected] 2 points 18 hours ago

Yup, put all the bad stuff into "not-robots.txt". Works every time.

[–] [email protected] 0 points 21 hours ago (1 children)

How exactly would that work? Would trusted crawlers be blocked from accessing the maze?

[–] [email protected] 5 points 21 hours ago (2 children)

You can tell which crawler it is by its User-Agent header
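As a hypothetical sketch of that check (the substrings match what major search and archive crawlers advertise, but the list is illustrative):

```python
# Illustrative User-Agent allowlist: requests from known-good crawlers
# are routed to real content instead of the maze. The header is
# self-reported, so this is a courtesy filter, not a security boundary.
TRUSTED_UA_SUBSTRINGS = ("Googlebot", "bingbot", "archive.org_bot")

def is_trusted_crawler(user_agent: str) -> bool:
    return any(token in user_agent for token in TRUSTED_UA_SUBSTRINGS)
```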

[–] [email protected] 6 points 19 hours ago (3 children)

Which can easily be faked.

[–] [email protected] 1 points 19 hours ago

Yeah and then you allowlist them by blacklisting them from the maze.

[–] [email protected] 33 points 1 day ago (1 children)
[–] [email protected] 6 points 22 hours ago (1 children)

I haven't seen that episode in probably 15 years and I still remember exactly what this was.

[–] [email protected] 2 points 22 hours ago (1 children)

First thing that popped into my head after I read the headline!

[–] [email protected] 3 points 21 hours ago (1 children)

Can you explain for the rest of the class?

[–] [email protected] 6 points 21 hours ago* (last edited 21 hours ago) (2 children)
[–] [email protected] 2 points 17 hours ago
[–] [email protected] 3 points 20 hours ago (2 children)

I'm surprised no one has created a Trek wiki separate from the shitty Fandom site yet. Sometimes when I search for Doom info, I accidentally click the Fandom link and have to back out to get to the .org site.

[–] [email protected] 4 points 18 hours ago

The Minecraft wiki has been way better since they ditched Fandom.

[–] [email protected] 3 points 20 hours ago* (last edited 20 hours ago) (1 children)

I was aware of the two Doom wikis, but not the reason there was a split, and I've heard other complaints about fandom sites before. What's the deal with that? I'm out of the loop.

[–] [email protected] 8 points 19 hours ago (1 children)

fandom.com has awful intrusive ads and a shitty slow website (probably largely because of the ads)

[–] [email protected] 6 points 19 hours ago* (last edited 19 hours ago)

Ah, that explains why I never had an issue with it: I use uBlock Origin.

blocked on this page: 69

Phew, you're not kidding! And the number keeps climbing the longer I leave it there. 84 now.
