This post was submitted on 23 Jan 2025
937 points (97.8% liked)

A pseudonymous coder has created and released an open source “tar pit” that indefinitely traps AI training web crawlers in an infinite series of randomly generated pages, wasting their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants that trap and consume their prey, can be deployed by website owners to protect their own content from being scraped, or deployed “offensively” as a honeypot to waste AI companies’ resources.

“It's less like flypaper and more an infinite maze holding a minotaur, except the crawler is the minotaur that cannot get out. The typical web crawler doesn't appear to have a lot of logic. It downloads a URL, and if it sees links to other URLs, it downloads those too. Nepenthes generates random links that always point back to itself - the crawler downloads those new links. Nepenthes happily just returns more and more lists of links pointing back to itself,” Aaron B, the creator of Nepenthes, told 404 Media.
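For intuition, here is a minimal sketch of that link-maze loop in Python with Flask. It is not Nepenthes' actual code; the /maze/ route, the slug generator, and the link count are illustrative assumptions.

```python
# Minimal tar-pit sketch -- an endless maze of self-referencing links.
# NOT Nepenthes itself; route name and parameters are assumptions.
import random
import string

from flask import Flask

app = Flask(__name__)

def random_slug(length: int = 10) -> str:
    """Random path segment, so every generated link is brand new."""
    return "".join(random.choices(string.ascii_lowercase, k=length))

@app.route("/maze/")
@app.route("/maze/<path:rest>")
def maze(rest: str = "") -> str:
    # Whatever URL the crawler requests, serve another page of links
    # leading back into the maze. Each URL is unique, so a naive
    # "download every link you see" crawler never runs out of pages.
    links = "\n".join(
        f'<a href="/maze/{random_slug()}">{random_slug()}</a>'
        for _ in range(20)
    )
    return f"<html><body>{links}</body></html>"

if __name__ == "__main__":
    app.run()
```

A crawler that, as the quote puts it, downloads a URL and then downloads every link it sees will wander here indefinitely, since every generated URL is new.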

(page 2) 50 comments
[–] [email protected] 34 points 2 months ago (1 children)

This reminds me of that one time a guy figured out how to make "gzip bombs" that bricked automated vuln scanners.
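For context, a gzip bomb is a tiny compressed response that expands into something enormous when the client inflates it. A hedged sketch in Python, with sizes chosen for illustration (the original trick's details may differ):

```python
# Sketch of a gzip bomb: ~10 GB of zeros compresses to roughly 10 MB,
# because gzip's maximum compression ratio is about 1000:1.
import gzip
import io

def make_gzip_bomb(gigabytes: int = 10) -> bytes:
    """Stream-compress a huge run of zeros without holding it in memory."""
    buf = io.BytesIO()
    chunk = b"\0" * (1024 * 1024)  # 1 MiB of zeros per write
    with gzip.GzipFile(fileobj=buf, mode="wb", compresslevel=9) as gz:
        for _ in range(gigabytes * 1024):
            gz.write(chunk)
    return buf.getvalue()
```

Served with a Content-Encoding: gzip header, the ~10 MB body balloons to ~10 GB inside any scanner that naively decompresses responses into memory.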

[–] [email protected] 35 points 2 months ago* (last edited 2 months ago) (1 children)
[–] [email protected] 2 points 2 months ago (6 children)

DDoS? Where was the distribution part?

[–] [email protected] 148 points 2 months ago (9 children)

This showed up on HN recently. Several people who have written web crawlers pointed out that this won’t even come close to working except on terribly written crawlers. Most limit the number of pages crawled per domain based on the domain’s popularity, so they’ll index all of Wikipedia, but they definitely won’t crawl all 1 million pages of your unranked website expecting to find quality content.
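A sketch of the per-domain budget being described; the numbers and the hard-coded popularity table are assumptions, not any particular crawler's policy.

```python
# Per-domain crawl budget, as real crawlers reportedly apply.
from collections import Counter
from urllib.parse import urlparse

pages_fetched = Counter()

def crawl_budget(domain: str) -> int:
    # Stand-in for a real popularity signal (inbound links, traffic, etc.).
    popular = {"en.wikipedia.org": 10_000_000}
    return popular.get(domain, 1_000)  # unknown domains get a small cap

def should_fetch(url: str) -> bool:
    domain = urlparse(url).netloc
    if pages_fetched[domain] >= crawl_budget(domain):
        return False  # budget exhausted: an infinite maze stops mattering
    pages_fetched[domain] += 1
    return True
```

Once the cap is hit, the tar pit has cost the crawler at most its per-domain budget.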

[–] [email protected] 22 points 2 months ago (7 children)

Then that's where we hide the good stuff

[–] [email protected] 1 points 2 months ago (2 children)
[–] [email protected] 3 points 2 months ago
[–] [email protected] 5 points 2 months ago (1 children)

Like stuff that is not bad.

[–] [email protected] 3 points 2 months ago

Rule out the mediocre too, unless it's extremely mediocre, in which case it's OK

[–] [email protected] 79 points 2 months ago* (last edited 2 months ago) (5 children)

Can confirm, I have a website (https://2009scape.org/) with tonnes of legacy forum posts (100k+). No crawlers ever go there.

It's a shame that 404 Media didn't do any due diligence when writing this.

[–] [email protected] 7 points 2 months ago

Why would they? Outrage and meme content sell clicks; in-depth journalism doesn't.

[–] [email protected] 40 points 2 months ago

No crawlers ever go there.

if it makes you feel any better, I would go there if I were a web crawler.

[–] [email protected] 21 points 2 months ago (1 children)

2009scape!? If it's what I think it is, that's amazing. Legend.

[–] [email protected] 18 points 2 months ago

It is what you think it is, come join ^^. It's a small niche world

[–] [email protected] 67 points 2 months ago (3 children)

More accurately, it traps any web crawler, including regular search engines and benign projects like the Internet Archive. It should not be deployed without, at the very least, an allowlist of known trusted crawlers.

[–] [email protected] 21 points 2 months ago* (last edited 2 months ago) (1 children)

More accurately, it traps any web crawler

More accurately, it does not trap any competent crawlers, which have per-domain limits on how many pages they crawl.

[–] [email protected] 34 points 2 months ago (1 children)

Just put the trap in a space roped off by robots.txt; any crawler that ventures in there deserves to be roasted.
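That setup might look like the following robots.txt, assuming the maze lives under a /maze/ path (an illustrative choice, not Nepenthes' default):

```
# robots.txt at the site root
User-agent: *
Disallow: /maze/
```

Compliant crawlers read the file and skip the maze entirely; only scrapers that ignore robots.txt ever fall in.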

[–] [email protected] 2 points 2 months ago

Yup, put all the bad stuff into "not-robots.txt". Works every time.

[–] [email protected] 0 points 2 months ago (1 children)

How exactly would that work? Would trusted crawlers be blocked from accessing the maze?

[–] [email protected] 5 points 2 months ago (2 children)

You can tell which crawler it is from the User-Agent header.
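A sketch of such a check; the crawler tokens are real User-Agent substrings, but the list and the function are illustrative assumptions.

```python
# Illustrative User-Agent gate for the maze entrance.
TRUSTED_BOTS = ("Googlebot", "bingbot", "DuckDuckBot")

def is_trusted(user_agent: str) -> bool:
    # Substring match against known crawler tokens. As the reply below
    # notes, the header is trivially spoofed, so serious allowlisting
    # also verifies the client IP (e.g., via reverse DNS for Googlebot).
    return any(bot in user_agent for bot in TRUSTED_BOTS)
```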

[–] [email protected] 6 points 2 months ago (3 children)

Which can easily be faked.

load more comments (3 replies)
[–] [email protected] 1 points 2 months ago

Yeah, and then you allowlist them by blacklisting them from the maze.

[–] [email protected] 34 points 2 months ago (1 children)
[–] [email protected] 6 points 2 months ago (1 children)

I haven't seen that episode in probably 15 years and I still remember exactly what this was.

[–] [email protected] 2 points 2 months ago (1 children)

First thing that popped into my head after I read the headline!

[–] [email protected] 3 points 2 months ago (1 children)

Can you explain for the rest of the class?

[–] [email protected] 7 points 2 months ago* (last edited 2 months ago) (2 children)
[–] [email protected] 2 points 2 months ago
[–] [email protected] 3 points 2 months ago (2 children)

I'm surprised no one has created a Trek wiki separate from the shitty Fandom site yet. Sometimes when I search for Doom info I accidentally click the Fandom link and have to back out to get to the .org site.

[–] [email protected] 4 points 2 months ago

The Minecraft wiki has been way better since they ditched Fandom.

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago) (1 children)

I was aware of the two Doom wikis, but not the reason there was a split, and I've heard other complaints about Fandom sites before. What's the deal with that? I'm out of the loop.

[–] [email protected] 8 points 2 months ago (1 children)

fandom.com has awful intrusive ads and a shitty slow website (probably largely because of the ads)

[–] [email protected] 6 points 2 months ago* (last edited 2 months ago)

Ah, that explains why I never had an issue with it: I use uBlock Origin.

blocked on this page: 69

Phew, you're not kidding! And the number keeps climbing the longer I leave it there. 84 now.
