this post was submitted on 06 May 2024

The blog It's FOSS has 15,000 followers on its Mastodon account, which they think is causing problems:

When you share a link on Mastodon, a link preview is generated for it, right? With Mastodon being a federated platform (a part of the Fediverse), the request to generate a link preview is not generated by just one Mastodon instance. There are many instances connected to it who also initiate requests for the content almost immediately. And, this "fediverse effect" increases the load on the website's server in a big way.

Sure, some websites may not get overwhelmed with the requests, but Mastodon does generate numerous hits, increasing the load on the server. Especially, if the link reaches a profile with more followers (and a broader network of instances)... We tried it on our Mastodon profile, and every time we shared a link, we were able to successfully make our website unresponsive or slow to load.

The It's FOSS blog says they found three GitHub issues about the same problem: one from 2017, and two more from 2023. Other blogs reported the same issue over a year ago as well, including software developer Michael Nordmeyer and legendary Netscape programmer Jamie Zawinski.

And back in 2022, security engineer Chris Partridge wrote:

[A] single roughly ~3KB POST to Mastodon caused servers to pull a bit of HTML and... an image. In total, 114.7 MB of data was requested from my site in just under five minutes — making for a traffic amplification of 36704:1. [Not counting the image.]
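Worked out from the quoted numbers: 114.7 MB of responses triggered by a POST of roughly 3 kB is on the order of 115,000,000 / 3,000 ≈ 38,000, which is where a ratio like 36704:1 comes from (the exact value depends on the precise POST size).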

It's FOSS reports Mastodon's official position that the issue has been "moved as a milestone for a future 4.4.0 release. As things stand now, the 4.4.0 release could take a year or more (who knows?)."

They also state their opinion that the issue "should have been prioritized for a faster fix... Don't you think as a community-powered, open-source project, it should be possible to attend to a long-standing bug, as serious as this one?"

Abstract credit: https://slashdot.org/story/428030

all 37 comments
[–] [email protected] 0 points 6 months ago (1 children)

If I understand right, that means link previews are requested every single time a user sees it? The instance should request it once a week, cache it, and serve that to users.

[–] [email protected] 0 points 6 months ago

I believe instances generate the preview as soon as it's federated. The problem is that if you have many followers, each of their instances will try to generate a preview at the same time

[–] [email protected] 0 points 6 months ago (1 children)

Just fucking cache.

If a GET request is breaking your server, you're doing something horribly wrong.
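For anyone wondering what "just cache" looks like in practice, here is a minimal nginx microcache sketch, assuming the site sits behind an nginx reverse proxy; the zone name, paths and backend address are placeholders, not taken from the article:

proxy_cache_path /var/cache/nginx/microcache levels=1:2
                 keys_zone=microcache:10m max_size=100m inactive=10m;

server {
    listen 80;
    server_name example.org;                        # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:8080;           # assumed backend (the blog/CMS)
        proxy_cache microcache;
        proxy_cache_valid 200 301 302 1m;           # keep successful responses for one minute
        proxy_cache_lock on;                        # concurrent misses wait for a single backend fetch
        proxy_cache_use_stale updating error timeout;
        add_header X-Cache-Status $upstream_cache_status;   # handy for checking hits
    }
}

With proxy_cache_lock on, a burst of identical preview fetches within that minute results in exactly one request to the backend.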

[–] [email protected] 0 points 6 months ago* (last edited 6 months ago) (1 children)

It's about amplification attacks.

[–] [email protected] 0 points 6 months ago

Next stage: doesn't apply to DNS

For context, the DNS amplification factor is about 150.
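Putting the two figures side by side: 36704 / 150 ≈ 245, so the reported Mastodon amplification is a couple of hundred times larger than typical DNS amplification.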

[–] [email protected] 0 points 6 months ago

In the comments on the article, people have debugged their Cloudflare/caching configuration for them and told them what they're doing wrong.

[–] [email protected] 0 points 6 months ago (1 children)

Foss project: has 100 open issues

A year passes

Foss project: 50 issues got resolved, 50 new ones have been opened in the meantime

Why hasn't this giant project fixed a single bug?

[–] [email protected] 0 points 6 months ago (1 children)

This issue has been noted since Mastodon was initially released, more than 7 years ago. It has also been filed multiple times over the years, indicating that previous small "fixes" haven't fully resolved it.

[–] [email protected] 0 points 6 months ago (2 children)

I'm sure an affected website could have paid a web developer to find a solution to this issue in the past 7 years if it was that important to them.

[–] [email protected] 0 points 6 months ago (1 children)

People have submitted various fixes, but the lead developer blocks them. Expecting owners of small personal websites to pay to fix bugs in any random software that hits their site is ridiculous. This is Mastodon's fault, and they should fix it. For as long as the web has been around, the expected behavior has been for a software team to prioritize bugs that affect other sites.

[–] [email protected] 0 points 6 months ago (1 children)

If they don't want to pay to fix it, they can just block the user agent (or just fix their website, this issue is affecting them so much mainly because they don't cache).
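As a sketch of the user-agent route (assuming the preview fetcher identifies itself with "Mastodon" in its User-Agent, which recent versions do, e.g. "http.rb/x.y.z (Mastodon/4.x.x; +https://instance.example)"):

# goes in the http {} context
map $http_user_agent $is_preview_bot {
    default        0;
    "~*Mastodon"   1;
}

# goes in the relevant server {} block
if ($is_preview_bot) {
    return 403;        # previews simply won't render for links to this site
}

The trade-off is that previews stop rendering everywhere, not just during bursts.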

Relying on the competence of unaffiliated developers is not a good way to run a business.

[–] [email protected] 0 points 6 months ago

Relying on the competence of unaffiliated developers is not a good way to run a business.

This affects any site that's posted on the fediverse, including small personal sites. Some of these small sites belong to people who didn't set them up themselves and don't know how to block a user agent, or can't. Mastodon letting a bug like this languish, when it hits the small, independent parts of the web that Mastodon is supposed to be in favor of, is directly antithetical to its mission.

[–] [email protected] 0 points 6 months ago (1 children)

Or probably pay an extra $5 for the better hosting plan.

[–] [email protected] 0 points 6 months ago (1 children)

They say they do in the article.

[–] [email protected] 0 points 6 months ago

Then they aren't using it properly

[–] [email protected] 0 points 6 months ago (1 children)

They also state their opinion that the issue “should have been prioritized for a faster fix… Don’t you think as a community-powered, open-source project, it should be possible to attend to a long-standing bug, as serious as this one?”

It's crazy how every single entity who has any issue with any free software project always seems to assume their needs should be prioritized.

[–] [email protected] 0 points 6 months ago (1 children)

Well, the users collectively should dictate the priorities.

[–] [email protected] 0 points 6 months ago (1 children)

Why should they? The users of a free software project aren't entitled to anything.

If users want to dictate priorities, they should become developers; and if they can't or won't, they should at least try to support the developers financially.

[–] [email protected] 0 points 6 months ago

Because democracy

[–] [email protected] 0 points 6 months ago (1 children)

There's no reason why 114MB of static content over 5 minutes should be an issue for a public facing website. Hell, I probably could serve that and the images with a Raspberry Pi over my home Internet and still have bandwidth to spare.
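For scale, 114.7 MB in just under five minutes is about 114.7 / 300 ≈ 0.4 MB/s, i.e. roughly 3 Mbit/s sustained.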

I think they are throwing stones at the wrong glass house/software stack.

[–] [email protected] 0 points 6 months ago (1 children)

It is not, but a traffic amplification of 36704:1 is one hell of an exploitable surface.

With that same Raspberry Pi and a single 1gbit connection you could also do 333333 post requests of 3 KB in a single second made on fake accounts with preferably a fake follower on a lot of fediverse instances. That would result in those fediverse servers theoretically requesting 333333 * 114MB = ~38Gigabyte/s. At least for as long as you can keep posting new posts for a few minutes and the servers hosting still have bandwidth. DDosing with a 'botnet' of fediverse servers/accounts made easy!

I'm actually surprised it hasn't been tried yet now that I think about it...

[–] [email protected] 0 points 6 months ago (1 children)

That would result in those fediverse servers theoretically requesting 333333 * 114MB = ~38Gigabyte/s.

On the other hand, if the site linked would not serve garbage, and would fit like 1Mb like a normal site, then this would be only ~325mb/s, and while that's still high, it's not the end of the world. If it's a site that actually puts effort into being optimized, and a request fits in ~300kb (still a lot, in my book, for what is essentially a preview, with only tiny parts of the actual content loaded), then we're looking at 95mb/s.

If said site puts effort into making their previews reasonable, and serve ~30kb, then that's 9mb/s. It's 3190 in the Year of Our Lady Discord. A potato can serve that.

[–] [email protected] 0 points 6 months ago

autistic complaining about units

ok so like I don't know if I've ever seen a more confusing use of units. at least you haven't used the p infix instead of the / in bandwidth units.

like you used both uppercase and lowercase in units, but I can't say if it was intentional or not? especially as the letters you did uppercase are ones that should be uppercase?

anyway

1Mb

is theoretically correct, but you likely meant either one megabyte (1 MB) or one mebibyte (1 MiB) rather than one megabit (1 Mb)

~325mb/s

95mb/s

and

9mb/s

I will presume you did not intend to write ~325 millibits per second, but ~325 megabits per second. though to be fair, op also made a mistake I think: if you take the 333 333 request count from the segment you quoted at face value, the number they gave should be about 38 terabytes per second (38 TB/s), or roughly 304 terabits per second (304 Tb/s), not ~38 gigabytes. and that request count is itself wrong, because they calculated the number of requests you can make over a 1 gigabit per second link (which is what I assume they meant by gbit) without accounting for a byte being 8 bits: you can only make about 41 667 requests of 3 kB a second (sorry, I'm not checking what would happen if they meant kibibytes; I underestimated how demanding this would be, but I'm too deep in it now, so I'm gonna take that cop-out), giving roughly 38 terabits per second (38 Tb/s), or about 4.75 terabytes per second (4.75 TB/s), assuming the entire response is exactly 114 megabytes (114 MB), which is about 108.7 mebibytes (108.7 MiB). so anyway, at ~41 667 requests per second:

packet size    bandwidth (bits)    bandwidth (bytes)
1 Mb           41.7 Gb/s           5.2 GB/s
1 MB           333.3 Gb/s          41.7 GB/s
1 MiB          349.5 Gb/s          43.7 GB/s
300 kb         12.5 Gb/s           1.6 GB/s
300 kB         100.0 Gb/s          12.5 GB/s
300 kiB        102.4 Gb/s          12.8 GB/s
30 kb          1.25 Gb/s           0.16 GB/s
30 kB          10.0 Gb/s           1.25 GB/s
30 kiB         10.2 Gb/s           1.28 GB/s

hope that table is ok and all cause im in a rush yeah bye

[–] [email protected] 0 points 6 months ago (1 children)

...and here I am, running a blog that won't even bat an eye if it gets 15k hits a second, and I could run it on a potato. Probably because I don't serve hundreds of megabytes of garbage to visitors. (The preview image is also controllable iirc, so just, like, set it to something reasonably sized.)

[–] [email protected] 0 points 6 months ago (1 children)

Wait, you're going to tell me you don't actually have to serve bloat on a blog like It's FOSS? No way!

[–] [email protected] 0 points 6 months ago (2 children)

I only serve bloat to AI crawlers.

map $http_user_agent $badagent {
  default     0;
  # list of AI crawler user agents in "~crawler 1" format
}

if ($badagent) {
   rewrite ^ /gpt;
}

location /gpt {
  proxy_pass https://courses.cs.washington.edu/courses/cse163/20wi/files/lectures/L04/bee-movie.txt;
}

...is a wonderful thing to put in my nginx config. (you can try curl -Is -H "User-Agent: GPTBot" https://chronicles.mad-scientist.club/robots.txt | grep content-length: to see it in action ;))

[–] [email protected] 0 points 6 months ago (1 children)

Proxying to a remote server every time is pretty inconsiderate. You can save yourself (and washington.edu) a lot of bandwidth by downloading a copy of your own and compressing it (using gz, brotli, or zstd). Bonus points for picking the least decompression friendly compression algorithm to make decompressing their problem!
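A sketch of that, swapping the proxy_pass location for a local, pre-compressed copy (paths are illustrative; assumes the crawler accepts gzip, which they generally do):

# one-time setup, e.g.:
#   curl -o /var/www/traps/bee-movie.txt https://courses.cs.washington.edu/courses/cse163/20wi/files/lectures/L04/bee-movie.txt
#   gzip -9 /var/www/traps/bee-movie.txt

location = /gpt {
    default_type text/plain;
    add_header Content-Encoding gzip;           # client gets the compressed bytes as-is
    alias /var/www/traps/bee-movie.txt.gz;      # served locally, no request to washington.edu
}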

[–] [email protected] 0 points 6 months ago (2 children)

Or serve a gzip bomb (is that possible?)

[–] [email protected] 0 points 6 months ago

Depends on what compression options the client supports. gzip isn't great for setting up a bomb, and neither is brotli in most cases. zstd is relatively new on the web stage, and browser zstd support lacks the higher decompression levels.

Using existing bombs, you'll be able to make the scraper use more RAM than normal, but probably not much more than they already have available for fetching and rendering a modern JavaScript-based web page.

[–] [email protected] 0 points 6 months ago (1 children)

I don't think serving 86 kilobytes to AI crawlers will make any difference in my bandwidth use :)

[–] [email protected] 0 points 6 months ago

It's not. It just doesn't get enough hits for that 86k to matter. Fun fact: most AI crawlers hit /robots.txt first; they get served a Bee Movie script, fail to interpret it, and leave without crawling further. If I'd let them crawl the entire site, that'd result in about two megabytes of traffic. By serving an 86 kB file that doesn't pass as robots.txt and has no links, I actually save bandwidth. Not on a single request, but by preventing a hundred others.