That's only really true if you're relatively close to each other on the same ISP. The farther apart you are, and the more hops you need to make, the less likely it becomes, through no fault of your ISP.
There's always going to be some level of loss and retransmission. Hitting full line rate would take a perfect stream of UDP, since TCP needs acknowledgements in order to continue sending data. That overhead can be reduced by window scaling and multiplexing, but it's still going to happen (rough numbers in the sketch below).
Distance would add latency but shouldn't reduce speed on well-maintained infrastructure.
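To put rough numbers on the TCP point: the well-known Mathis approximation bounds steady-state TCP throughput by MSS/RTT × 1.22/√p, where p is the packet loss rate. A minimal sketch, with the MSS, RTT, and loss figures as illustrative assumptions rather than measurements:

```python
# Mathis et al. approximation for loss-limited TCP throughput:
#   throughput <= (MSS / RTT) * (1.22 / sqrt(p))
# MSS, RTTs, and the loss rate below are illustrative assumptions.
import math

def mathis_mbps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Rough ceiling on a single TCP stream's throughput, in Mbit/s."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate)) / 1e6

for rtt_ms in (10, 50, 100):
    est = mathis_mbps(mss_bytes=1460, rtt_s=rtt_ms / 1000, loss_rate=1e-4)
    print(f"RTT {rtt_ms:3d} ms at 0.01% loss -> ~{est:.0f} Mbit/s ceiling")
# ~142 Mbit/s at 10 ms, but only ~14 Mbit/s at 100 ms: the same tiny loss
# rate costs 10x the throughput at 10x the distance.
```

So with any nonzero loss, distance does reduce a single TCP stream's achievable speed, not just its latency.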
Incorrect, and that was exactly my point
This is like saying that if the fruit at a store is sometimes rotten, it's not the grocer's fault because the fruit had to come a long way and went bad in transit. The exact job you're paying the ISP for is to deal with the hops and give you good internet. It's actually a lot easier at the trunk level, because the pipes are bigger and more reliable, and there are more of them, which means more redundancy and predictability, and they get more attention.
I won't say there isn't some isolated exception, but in reality it's a vanishingly small minority of the time. Take an internet connection that's having difficulty reaching its advertised speed and run mtr or something (see the sketch below), and I can almost guarantee you'll find the problem near one end or the other, where there's only one pipe and maybe it's having hardware trouble or is individually underprovisioned.
Actually, Verizon deliberately underprovisioning Netflix is the exception that proves the rule: that was a case where an upstream pipe really wasn't big enough to carry all the needed traffic, but the problem was perfectly visible to them, they could easily have solved it if they wanted to, and they chose not to. The result was visibly different from normal internet performance in almost any other case.
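A minimal sketch of that kind of check, shelling out to mtr's report mode and flagging lossy hops. The mtr flags are standard, but the column parsing assumes mtr's default report layout, and the target host and 5% threshold are arbitrary examples:

```python
# Sketch: run `mtr --report` and flag hops with notable packet loss.
# Assumes mtr is installed; parsing assumes its default report layout.
import subprocess

def lossy_hops(target: str, cycles: int = 10, threshold_pct: float = 5.0):
    """Return (hop, host, loss%) tuples for hops at or above the threshold."""
    report = subprocess.run(
        ["mtr", "--report", "--report-cycles", str(cycles), "--no-dns", target],
        capture_output=True, text=True, check=True,
    ).stdout
    flagged = []
    for line in report.splitlines():
        parts = line.split()
        # Hop rows look like: "  3.|-- 203.0.113.1   12.0%   10   ..."
        if len(parts) >= 3 and parts[0][0].isdigit() and parts[2].endswith("%"):
            loss = float(parts[2].rstrip("%"))
            if loss >= threshold_pct:
                flagged.append((parts[0].rstrip(".|-"), parts[1], loss))
    return flagged

for hop, host, loss in lossy_hops("example.com"):
    print(f"hop {hop} ({host}): {loss:.1f}% loss")
```

One caveat when reading the output: loss at an intermediate hop is often just a router rate-limiting its ICMP replies; loss that persists through to the final hop is the real signal.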
I probably should've been a little clearer that I'm talking about scales of thousands of km here.
I'm on an island in the North Atlantic. I don't hold it against my ISP if I can't get my full 1.5Gbps down from services hosted in California.
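For scale: even with zero loss, a single TCP stream has to keep a full bandwidth-delay product of data in flight to fill the pipe. A back-of-the-envelope sketch, where the fiber distance and propagation speed are assumed round numbers:

```python
# Bandwidth-delay product for a long transatlantic path.
# Distance, link rate, and propagation speed are assumed round numbers.
fiber_km = 7_000          # rough one-way fiber path, North Atlantic -> California
prop_km_per_s = 200_000   # light in fiber travels at roughly 2/3 c
rate_bps = 1.5e9          # the advertised 1.5 Gbit/s

rtt_s = 2 * fiber_km / prop_km_per_s      # ~0.07 s before any queuing delay
window_bytes = rate_bps * rtt_s / 8       # data that must be in flight

print(f"RTT ~{rtt_s * 1e3:.0f} ms, window needed ~{window_bytes / 1e6:.0f} MB")
# -> RTT ~70 ms, window ~13 MB: doable with window scaling, but a single
# lost packet means a lot of data to retransmit, so sustaining the full
# 1.5 Gbit/s over that distance is genuinely hard.
```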
Yeah, that makes sense; that's a little different. In that case there actually is congestion on the trunk that makes things slow for the customers.
My point, I guess, is that the people who want to sell a "fast lane" to their customers, or who claim Net Neutrality is the reason your home internet is slow when you're accessing North America, are lying. Neutrally applied traffic shaping to make things work is allowed, of course; they just want to throttle their competitors, and they're annoyed that the government is allowed to tell them not to.
Ehhh, I get what you're saying, but then I would rephrase the above poster's comment a little. If a person is paying for 100Mbps, and they can find a source or some combination of sources able to supply them 100Mbps of data, then that's what they should be getting. The easiest example is a torrent of a popular Linux distro.
I personally think the solution should be some kind of regulatory minimum around advertised speed, or a contractual service obligation. For example, if a person pays for a 100Mbps connection, the ISP should be required to supply that speed within ±5% instantaneously and no more than 0.5% below it on average (because if you give them a range, you know they'll maintain the lowest possible speed that keeps them in compliance). A sketch of what that check would look like follows this comment.
Don't look too hard at my numbers, I pulled them out of my ass, but hopefully it gets the idea across.
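A minimal sketch of that compliance check, using the (admittedly made-up) ±5% instantaneous and −0.5% average tolerances from above; the advertised rate and samples are hypothetical:

```python
# Sketch of the proposed rule: every sample within ±5% of the advertised
# rate, and the average no more than 0.5% below it. All numbers are the
# made-up tolerances from the comment above.

def compliant(advertised_mbps: float, samples_mbps: list[float]) -> bool:
    inst_ok = all(abs(s - advertised_mbps) <= 0.05 * advertised_mbps
                  for s in samples_mbps)
    avg = sum(samples_mbps) / len(samples_mbps)
    avg_ok = avg >= advertised_mbps * (1 - 0.005)
    return inst_ok and avg_ok

print(compliant(100, [99.2, 100.4, 98.9, 101.0]))  # True: tight around 100
print(compliant(100, [95.0, 95.1, 95.0, 95.2]))    # False: lowballed average
```

The average-based clause is what catches the lowballing: every sample in the second case is technically within ±5%, but pinning the speed at the bottom of the range fails the −0.5% average test.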
Keep in mind that because few residential users max out their capacity simultaneously, ISPs "overbook" capacity. Usually this works out, because they have solid stats on average use and few people actually need the maximum at the same time (toy model below).
Of course, some ISPs are greedier than others and take it to the extreme, where the uplink/downlink is regularly maxed out and a significant fraction of customers get nowhere near the promised bandwidth. That should be disincentivized.
Force the ISPs to keep stats on peak load and on how often their customers are unable to get the advertised bandwidth; if that rate is above some threshold, treat it like excess downtime and make them pay back the affected customers. Then the only way they can avoid losing money is by either changing their plans to a realistic offer or building out capacity.
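As a toy model of why overbooking usually works and when it tips over: treat each subscriber as independently active at full rate with some probability, and count how often total demand exceeds the shared uplink. Every number here is an assumption for illustration:

```python
# Toy Monte Carlo of oversubscription: n subscribers share one uplink,
# each independently pulling full rate with probability p_active.
# Subscriber count, rates, and probabilities are illustrative assumptions.
import random

def congestion_rate(n_subs: int, sub_mbps: float, uplink_mbps: float,
                    p_active: float, trials: int = 10_000) -> float:
    """Fraction of sampled instants where demand exceeds the uplink."""
    congested = 0
    for _ in range(trials):
        active = sum(random.random() < p_active for _ in range(n_subs))
        if active * sub_mbps > uplink_mbps:
            congested += 1
    return congested / trials

# 500 subscribers at 100 Mbit/s on a 10 Gbit/s uplink (5:1 oversubscription):
for p in (0.1, 0.2, 0.3):
    rate = congestion_rate(500, 100, 10_000, p)
    print(f"p_active={p:.1f}: congested ~{rate:.1%} of the time")
# Near-zero congestion at p=0.1, ~50% at p=0.2, near-constant at p=0.3:
# the same provisioning flips from fine to broken as average use creeps up.
```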
Yeah, I wish we'd do this.
I have a good ISP that has delivered the advertised speed pretty much every time I've tested it (usually a few times a year, usually during peak hours). But I've had bad ISPs where I never got the advertised speed: the best I saw was 15% below advertised, and it was usually 30% or more below.