Sonic Internet - anyone have any experience?

99% of applications do not need high throughput; only the occasional bulk download will ever come close to using the port speed. Application performance is a product of delivery (packet loss), latency, and jitter. Marketing the 'benefits' of 10G speeds clearly works, but in reality there is no benefit. I want to see the quality trends of the provider's network, but for some reason marketing doesn't share those :cool:

I couldn't agree more. Show me the average, min, and max latency from the CPE to the internet at the edge of your network over a period of time, plus uptime stats, and I will be far more likely to buy your service than just "oh we offer 10G peak speeds woohoo". I'd say anything above 100Mbps is really "fine" for most people. I'd rather have 100M with 100% uptime than 10G with 95% uptime. Realistically the best is somewhere in between, maybe 1G with 99.95% uptime, so about 4 hours a year of downtime :)
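Just to put numbers on those uptime figures, here's a quick back-of-the-envelope calc (the percentages are the ones from above, the rest is plain arithmetic):

```python
# Rough downtime-per-year math for the uptime figures mentioned above.
HOURS_PER_YEAR = 365 * 24  # 8760

for uptime_pct in (95.0, 99.95, 100.0):
    downtime_hours = HOURS_PER_YEAR * (1 - uptime_pct / 100)
    print(f"{uptime_pct}% uptime -> ~{downtime_hours:.1f} hours of downtime per year")

# 95%    -> ~438 hours (over 18 days!)
# 99.95% -> ~4.4 hours, i.e. the "about 4 hours a year" above
# 100%   -> 0 hours
```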

The installer told me that other people on the main line wouldn't affect my speed. I questioned the same thing. The line comes to a pole about 1,000 ft from my house. It terminates at a box that each house connects to. There are 3 of us connected to that switch.

There's always a bottleneck somewhere, but if done right, you'll never notice. A 10G uplink on a 48-port switch with each customer connected at 1G would likely be fine for typical home internet; 40G or 2x 10G is probably more realistic. But then you have another switch, let's say 24 ports, where all those 40G uplinks go, and that could likely be served just fine by a single 100G uplink. After all, if everyone is streaming 4K video, which uses what, 50Mbps? Let's say we have 46 customers doing that; that's only 2.3Gbps. Now multiply that by, say, 20 for the next tier of switches and you get 46Gbps... you have plenty of headroom to account for households where you might have 3 people streaming 4K at the same time or someone downloading a Steam game, which can use a full 1Gbps of bandwidth.
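To spell that back-of-the-envelope math out (these are the numbers assumed in the paragraph above, not measurements from any real ISP):

```python
# Oversubscription sketch using the figures from the post above.
STREAM_MBPS = 50           # assumed per-customer 4K streaming load
CUSTOMERS_PER_SWITCH = 46  # 48-port switch, a couple of ports left for uplinks
SWITCHES = 20              # access switches feeding the next aggregation tier

per_switch_gbps = CUSTOMERS_PER_SWITCH * STREAM_MBPS / 1000
aggregate_gbps = per_switch_gbps * SWITCHES

print(f"Per access switch: {per_switch_gbps:.1f} Gbps (vs a 10-40G uplink)")
print(f"Aggregation tier:  {aggregate_gbps:.0f} Gbps (vs a 100G uplink)")
# Per access switch: 2.3 Gbps
# Aggregation tier:  46 Gbps
```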

The most challenging part IMO is once the customer leaves your network and hits the internet, but that's where internet exchanges and peering come in. Let's say hypothetically I visit a website that is hosted on a server at XYZ provider but their site is behind Cloudflare, and my hypothetical ISP peers with Cloudflare directly... Or you stream a video off YouTube and the ISP peers with Google... it's a win-win for both ends... your ISP isn't paying for you to use third party bandwidth and neither are Cloudflare or Google. Both companies just saved some money!

I work part time for a friend's company; we do hosting, virtual servers, dedicated servers, colocation, and enterprise connectivity. Networking is not my department, so I don't know much about it... but I can tell you, once you are dealing with a certain volume you can serve a TON of customers with what sounds like a small amount of bandwidth.

That's how a $10/mo VPS with shared 1Gbps connectivity can be profitable while delivering full gig speeds 95%+ of the time, or how Sonic can sell 10G internet for $50/mo and still deliver the advertised speeds. Even better if you are an ISP and host an Ookla speedtest server, 9/10 users won't change the default one haha.
 
The installer told me that other people on the main line wouldn't affect my speed. I questioned the same thing. The line comes to a pole about 1,000 ft from my house. It terminates at a box that each house connects to. There are 3 of us connected to that switch.

At least with this one the pole is right in front of my parents' house. I was there on Sunday watching football with my dad and I saw one guy in a cherry picker connecting the line to the pole and another with a ladder placing it on the house. The only complaint my dad has is that they now have the line going over the roof to the equipment.

I guess this service is a passive optical network where there are no repeaters or anything, and several clients multiplex on the same line. But I'm sure in a well-populated suburb they'll have a bundle of lines so it's not just one line serving hundreds of customers. Back when cable internet was fairly new, there was talk of the lines being congested because multiple customers shared the same line. But they supposedly relieve congestion by adding more lines.
 
I work part time for a friend's company; we do hosting, virtual servers, dedicated servers, colocation, and enterprise connectivity. Networking is not my department, so I don't know much about it... but I can tell you, once you are dealing with a certain volume you can serve a TON of customers with what sounds like a small amount of bandwidth.

Sure - for many users bandwidth isn't necessarily the biggest issue. Once I was working a short term job and rented a room in a house. I guess I could have theoretically commuted 90 miles (one way) every day but after a while that becomes too much.

But in the house I was sharing the internet and utilities. The primary tenant was playing games and had maybe a 100 ft cable going all the way to his room from downstairs. I asked him why he didn't just use Wi-Fi or possibly a Wi-Fi bridge (if his gaming equipment only had a wired port) and he said he needed to be wired for better response. It might have even been just an old 10 mbit/sec connection.
 
Networking is not my department, so I don't know much about it... but I can tell you, once you are dealing with a certain volume you can serve a TON of customers with what sounds like a small amount of bandwidth.

That's how a $10/mo VPS with shared 1Gbps connectivity can be profitable while delivering full gig speeds 95%+ of the time, or how Sonic can sell 10G internet for $50/mo and still deliver the advertised speeds. Even better if you are an ISP and host an Ookla speedtest server, 9/10 users won't change the default one haha.
It's called statistical aggregation and that's how providers make money. The tier 1 providers have many backbone links far > 1Tb/s, which are super expensive, but carry the traffic of tens of millions of endpoints which all pay a fee.
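A toy way to see why statistical aggregation works (purely illustrative numbers, not anything from a real provider): if each subscriber is only pulling data a small fraction of the time, the chance that enough of them are busy at once to fill the shared link is tiny.

```python
# Toy statistical-multiplexing model: N subscribers share one uplink, and each
# is "busy" (pulling near full rate) independently with probability p.
# Illustrative numbers only.
import random

random.seed(1)
N = 100        # subscribers on the shared link
p = 0.05       # fraction of time each subscriber is actually busy
capacity = 10  # how many simultaneously busy subscribers the link can carry

trials = 20_000
congested = 0
for _ in range(trials):
    busy = sum(random.random() < p for _ in range(N))
    if busy > capacity:
        congested += 1

print(f"Link congested in {100 * congested / trials:.2f}% of samples")
# With these numbers congestion shows up only a percent or so of the time,
# even though the link is 10:1 oversubscribed.
```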
 
So it's like our home network on a much bigger scale. The amount of bandwidth our homes need depends on how many devices will be connected to it. I only have about 5-6 devices that connect to my network so 500/500 serves me well.
 
The same idea, yeah.

What you want is to have fairly congestion-free traffic both up and down. The problem I run into with 20Mb/s up and 1Gb/s down is the upload. When I sync a 1GB file from my work PC to OneDrive, my PC will completely congest the 20Mb/s upload shaper of my cable modem, and then our voice and/or video conferences suffer from packet loss and jitter on the upload side. I call this "bumping your head on the shaper"; most people just look at me like I'm weird when I say that. To keep this from happening, I set a shaper in the OneDrive clients that only allows a maximum of 15Mb/s upload. This mostly prevents our communications traffic from being randomly dropped by the input shaper of the cable modem.

I hear this tell-tale packet loss every day on conference calls when the speaker's PC is syncing files in the background. Sometimes I suggest configuring a shaper, mostly I don't bother.
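For anyone curious what a shaper actually does under the hood, here's a minimal token-bucket sketch. This is just the general mechanism, not OneDrive's or the cable modem's actual implementation, and the 15 Mb/s / 64 KB numbers are only examples:

```python
# Minimal token-bucket shaper sketch (the general mechanism, not any
# specific product's implementation).
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8     # refill rate in bytes per second
        self.capacity = burst_bytes  # maximum burst size
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # send now
        return False      # over the rate: queue or drop instead of blasting the modem

# Shape bulk uploads to 15 Mb/s so they leave headroom under a 20 Mb/s link.
shaper = TokenBucket(rate_bps=15_000_000, burst_bytes=64_000)
print(shaper.allow(1500))  # a typical packet fits within the burst -> True
```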
 
99% of applications do not need high throughput; only the occasional bulk download will ever come close to using the port speed. Application performance is a product of delivery (packet loss), latency, and jitter. Marketing the 'benefits' of 10G speeds clearly works, but in reality there is no benefit. I want to see the quality trends of the provider's network, but for some reason marketing doesn't share those :cool:

It might matter if it's a business connection at a place where a lot of people are simultaneously using Wi-Fi and/or wired internet. I'm thinking an Apple Store with dozens of customers trying out this or that must create a lot of demand on a single line. However, if there's any retail store that would need a dedicated fiber connection to a central office, an Apple Store would be the place.

I asked a question about testing the speeds, since most people do that with a computer or mobile device that's just a client and likely isn't capable of more than a 1G connection, other than maybe some specialty desktop computers. But I poked around the setup/diagnostics for a new Netgear Wi-Fi box and see that it's got a speed test built in that connects to speedtest.net. So I suppose something that could actually handle the speed could accurately test it. And obviously, with dozens of simultaneous connections using bandwidth, one could theoretically be using up to 8-9 gbit/sec in aggregate across those multiplexed connections.

But even if you've got 50 homes doing something like 150 simultaneous video streams and large downloads, I can't imagine it's going to be more than maybe 1-2 gigabit/sec total. I can't think of anything that moves that amount of data other than dedicated internet speed testing.
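Rough numbers on that estimate (the per-stream bitrates here are ballpark assumptions, not measurements):

```python
# Sanity check on the 150-simultaneous-streams estimate (ballpark bitrates).
streams = 150
for label, mbps in (("HD", 8), ("4K", 25)):
    print(f"{streams} {label} streams at ~{mbps} Mbps each: {streams * mbps / 1000:.2f} Gbps")
# 150 HD streams: ~1.2 Gbps (in line with the 1-2 Gbps guess)
# 150 4K streams: ~3.75 Gbps (would blow past it)
```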
 

It's about aggregate throughput and ANY connection can be congested. I've seen > 3Tb/s connections that were congested from time to time.
 

I always wondered what large public places like malls do for internet service. Lots of stores have Wi-Fi and they tend to have cafes that these days will have public Wi-Fi.

The worst public Wi-Fi experience I've ever had was at the Jackson Lake Lodge at Grand Teton National Park. This was back in 2006 even before the iPhone came out. Way before streaming services were ubiquitous. We were just walking through and saw dozens of people with laptop computers in the lobby (with their view of the Tetons and the Snake River). I asked and someone said they had public Wi-Fi. So we drove back to our cabin and I got my old iBook to just check on some things like my email and maybe do a little research for the rest of our trip. There was access to the internet, but if I were to guess it was so slow that I wouldn't have been surprised if it was a dialup connection shared between 50 people. Either that or they were using older consumer grade Wi-Fi equipment.
 
But even if you've got 50 homes doing something like 150 simultaneous video streams and large downloads, I can't imagine it's going to be more than maybe 1-2 gigabit/sec total. I can't think of anything that moves that amount of data other than dedicated internet speed testing.
And that's the difference between pooled and dedicated bandwidth and why the price difference is so massive.

At some offices I manage we have a 500/500 dedicated Rogers Enterprise fibre connection. This is not a pooled connection, it's our own pair, running back to the CO (we also have a /28). That's about $900/month up here.

So, if you are an ISP, you can run a 10Gbit fibre link into a neighbourhood and then sell it off as, say, 1.5Gbit to 100 homes at $80/month for $8K/month, so you are more than covering your bandwidth costs.

Cable companies work on this same principle (and now they market they are "fibre fed"). You have a large link that feeds into an area and then you over-subscribe the heck out of it to make the infra costs back and profit.
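Putting the numbers from that example side by side (all figures are the ones quoted above):

```python
# The neighbourhood-link example from above, just spelled out.
uplink_gbps = 10
homes = 100
sold_gbps_per_home = 1.5
price_per_home = 80  # $/month

oversub_ratio = homes * sold_gbps_per_home / uplink_gbps
revenue = homes * price_per_home

print(f"Sold capacity: {homes * sold_gbps_per_home:.0f} Gbps over a {uplink_gbps} Gbps link "
      f"({oversub_ratio:.0f}:1 oversubscription)")
print(f"Revenue: ${revenue:,}/month vs ~$900/month for a dedicated 500/500 pair")
# Sold capacity: 150 Gbps over a 10 Gbps link (15:1 oversubscription)
# Revenue: $8,000/month
```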
 

It's easy enough to understand with analogies. Like gym memberships, where the prices charged couldn't possibly provide a profit if everyone with a membership showed up for 3 hours a day. Maybe a buffet restaurant where they're hoping that there are enough light eaters to make up for those who overindulge.

I remember when DSL was heavily in use because it made use of preexisting telephone lines. But it's horrible, and considerably worse the farther you are from the central office. I remember doing college lab exercises testing connections that weren't proper transmission lines and trying out different terminations, and we could see the equipment showing all sorts of weird reflections and interference. A lot of the technology that went into DSL was about getting the most out of an environment that was never designed to carry digital data and couldn't do it cleanly over more than a few feet of 24 gauge straight copper wire. That takes something like a transmission line, or at least twisted pairs. DSL was always a way to avoid spending money on new infrastructure.

I've hinted at the old Pacific Bell (or SBC) "Laurel Lane" commercials where neighbors supposedly sharing a cable internet connection are accusing each other of being "web hogs".



 
Got bored and took apart a piece of the excess cable left behind. Just a single fiber strand surrounded by a PTFE jacket? But the bulk is two fibrous strands that I found are just there for mechanical strength and to keep the glass from being bent too much.
 
Just a single fiber strand surrounded by a PTFE jacket?
BiDi (Bi-Directional) fiber. Different frequencies for TX and RX down a single piece of glass. Carriers are starting to use BiDi in access networks, because it effectively doubles the number of fibers available. Most new PON connections to homes are BiDi.
 

It was easy to pull apart the two outer segments to expose the fiber in the middle. I'm still wondering about multiple frequencies because I remember that the refractive index should be tuned for a specific frequency to achieve total internal reflection and isn't even constant throughout the glass (higher in the middle). Looked it up and I guess it's called "graded-index". May not be important for a few miles?
 
There are two bands of wavelengths that travel well in fiber due to low attenuation of the glass. They are around 1300 and 1500 nanometers. PON services use one for uplink and the other for downlink. Think in terms of different colors of light, though both of these wavelengths are far into the infrared and not visible. Since the wavelengths are far enough apart, a very simple optical filter included in each ONT can separate them.

The mechanical load bearing strands in optical cable are made of Kevlar or a generic equivalent known as aramid polymer.
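If it helps to see how far apart those two "colors" really are, here's the conversion using the rough 1300/1500 nm figures above (real PON plans pin down specific wavelengths in those bands, e.g. around 1310 and 1490 nm, but the point is the same):

```python
# How far apart the two PON bands are, using the rough wavelengths above.
C = 299_792_458  # speed of light in m/s

for name, nm in (("~1300 nm (one band)", 1300), ("~1500 nm (other band)", 1500)):
    thz = C / (nm * 1e-9) / 1e12
    print(f"{name}: ~{thz:.0f} THz")
# ~1300 nm -> ~231 THz, ~1500 nm -> ~200 THz: roughly 30 THz apart,
# which is why a simple optical filter in the ONT can separate them.
```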
 

It seemed like a really oddball design to have two big bundles of reinforcing strands on either side of the fiber. But then I thought about it, and it's probably much easier to manufacture and terminate than cables with the reinforcing layer wrapping the fiber. I'm guessing that there are a lot of specialty tools for cutting and terminating different styles. I played around with them using scissors, diagonal cutting pliers, and a nail clipper.

I think this is it (or at least something close to it), although I'm sure the reinforcing strands aren't that yellow. It's the "Fastaccess" version where it's easy to pull apart the strand segments by hand.

https://nassaunationalcable.com/pro...ic-with-fastaccess-tech-cable-001eb4-14701df9

 
They are around 1300 and 1500 nanometers. PON services use one for uplink and the other for downlink.
We should also state that you should never under any circumstances point a fiber at your eye. There can be so much power transmitted down the fiber that it will instantly cook your retina and you'll be blind in that eye forever. I won't even qualify which types of fibers carry that much power, because you shouldn't attempt to differentiate "dangerous" and "not dangerous", just do not look into a fiber, ever. It's as stupid as looking down the barrel of a loaded gun.
 
That is important. Don't point a fiber at your eye. There's no reason to as the light is invisible.

Many phone cameras will sense the light from a live fiber as a faint purple or white glow. This is a safe way to confirm the path to the ISP is present.
 

I'm not worried about TOSLINK. But other than that, I don't know why anyone would be looking down any live optical fiber. Aren't there safety glasses for working with live fiber optics?
 
I don't know why anyone would be looking down any live optical fiber.
When you look at a connector to see if there is any dirt on the fiber, it's really easy to point it at your eye. You have to look at the end at an angle so none of the light hits your eye.
 