Requirements for 5G are getting equally ridiculous [1]. 5 ms latency and 100 devices per square meter? Whenever I hear something like that, I like to outline a meter-by-meter square of empty floor space and ask people what they imagine 100 devices will be doing in that particular spot.
What are you really going to do with a 1 Gbps residential connection that doesn't sound as tentative and outlandish as my plan to push a shopping cart full of iPhones, each streaming its own episode of GoT, down the street? (It's art, don't ask.)
This isn't about having a feasible plan to exhaust the new standard using current technology and use cases, it's about pushing the new standard as far as possible (within technological limits), so it can last as long as possible without needing to be redesigned. Also, reality has a nasty habit of catching up to the possible much faster than anticipated.
It's funny how each standard is made to "last as long as possible without needing to be redesigned". LTE stands for "long term evolution" - as in "we made this standard so that it will be possible to evolve it according to demands without total redesign". It has been in commercial use for what, 4 years? As far as I know, current plans for 5G are "screw LTE, we'll redesign it from the ground up".
Maybe I'm miscalculating, but I think you're off by an order of magnitude.
According to [1], a dense open-plan office in Manhattan has 100-120 ft² per employee; let's take 108 ft², which is ≈10 m². That means each m² has ~1/10 of an employee. Even if you multiply by 100 floors, you only get ~10 employees/m².
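For what it's worth, here's that back-of-the-envelope in a few lines of Python (a sketch; the 108 ft²/employee figure and the 100-floor stack are the assumptions above, not anything official):

    # Sanity-checking the density estimate (assumptions from above:
    # 108 ft² per employee, 100 floors stacked on the same m² of land).
    FT2_PER_M2 = 10.7639

    area_per_employee_m2 = 108 / FT2_PER_M2                  # ~10 m²
    employees_per_m2_per_floor = 1 / area_per_employee_m2    # ~0.1
    floors = 100

    print(employees_per_m2_per_floor * floors)   # ~10 employees per m² of land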
I think your miscalculation might be devices per person, not people per square meter.
My desk area in Tokyo (so maybe not 120ft²; let's call it 60ft²) has a whole lot more devices than people. 4 computers + personal/work phones + 1 watch + 3 tablets + wireless camera + let's say 5 other random items with an IP address that might be paying a visit (think health/fitness devices, teledildonics, wifi-enabled geiger counters, etc.).
I may not be typical in April 2016, but I'm also not that much of an outlier. 10+ devices per person seems to be the direction we are heading, so... at 100 floors, yeah 100+ devices per square meter.
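Rough numbers (the ~10 employees/m² is the 100-floor estimate above; the 10+ devices per person is my own guess, not a measured figure):

    # Devices per m² of land, under the assumptions in this subthread.
    employees_per_m2_of_land = 10    # ~0.1 per floor x 100 floors
    devices_per_person = 10          # my guess, not a measured number

    print(employees_per_m2_of_land * devices_per_person)   # ~100 devices/m²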
While I do think you're an outlier, remember that we're specifically talking about 5G-enabled devices. "Wifi-enabled geiger counters" don't really, er, count :)
>I may not be typical in April 2016, but I'm also not that much of an outlier. 10+ devices per person seems to be the direction we are heading, so... at 100 floors, yeah 100+ devices per square meter.
Not sure where you get this "100 devices per square meter" again.
Even if it were 10+ devices per person (it's at best 3 or 4), there are still hardly more than 1 or 2 people per square meter.
And across floors there are one or several meters of empty space -- above people's heads and below the ceiling.
Well, the example I was replying to was for a 100-story building, with a total of ~10 employees/m².
I think this absurdly specific hypothetical building is getting in the way of my point, which is simply that we humans have an increasing number of networked devices on and around us, and ~10 devices is already common for some people.
Yeah, I know, but that's the specific calculation I was replying to — icebraining's figure of people per square meter of land.
This subthread veered off from the original linked post about broadband speed in general to discuss how upcoming "5G" cellular service is expected [1] to support 100 devices per square meter - that means per square meter of coverage area, regardless of how many floors are stacked on that square meter (the same way area capacity gets quoted in Mbps/m²).
Telcos hate low latency. High latency is in their DNA. Nobody has ever been able to sell low latency except to HFT people, and until they started asking, nobody knew what a circuitous route the cables took from Manhattan to New Jersey.
The first research project Bell Labs did was a test to see how much latency they could get telephone users to tolerate.
It makes sense just fine. The research about latency tolerance informed the design constraints for packet-switched digital phone networks, which had been proposed since the 1950s and began to replace POTS in practice in the 1980s (e.g. System X in London in 1980).
Latency requirements constrain routing and buffering specs. Packet switched voice could not work until the system could meet human tolerance of latency.
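To make "meet human tolerance of latency" concrete, here's a rough one-way delay budget for packet voice (the ~150 ms ceiling is the commonly cited ITU-T G.114 guideline; the other numbers are illustrative assumptions, not measurements):

    # Illustrative one-way mouth-to-ear delay budget for packet voice.
    BUDGET_MS = 150                 # commonly cited acceptability ceiling

    packetization_ms = 20           # one 20 ms voice frame per packet
    jitter_buffer_ms = 40           # playout buffer absorbing network jitter
    codec_and_queuing_ms = 15       # encode/decode plus per-hop queuing (guess)
    propagation_ms = 4000 * 0.005   # ~5 us/km in fiber over a ~4000 km path

    total_ms = (packetization_ms + jitter_buffer_ms +
                codec_and_queuing_ms + propagation_ms)
    print(total_ms, "ms one way vs", BUDGET_MS, "ms budget")   # ~95 ms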
While that is true, at a certain point more latency in a working system costs money, especially in the earlier switched systems.
But yeah, they needed to do research on it. I would doubt it was "the first research project Bell Labs did", though, since Bell Labs predates digital phone networks.
Sure, but no-one is ever talking about deliberately over-long latency. There's a sweet spot where packet-switched is fast enough, while cheap because of shared bandwidth and routing flexibility. It costs more to go faster, but it's game over to go slower than humans will accept.
And it's unlikely that the poster was referring to direct (non-packet) connections since these have been rare for a long time.
I understand that, just saying that not every country has the insane distances the US does, and even most US content is mirrored on CDNs that are usually pretty close.
I think the mirrored thing is mostly for assets -- JS, images, etc.
For most smaller websites it's one server somewhere (for the majority of them actually co-hosted with other sites in a single location/box).
And for most large websites outside of the very biggest (e.g. FB and Google), it's at best a few locations around the world, e.g. US east/west/mid, one in Europe, one in Asia, etc., and often even fewer. So the distances involved do get large there too.
Could quantum physics hold the solution? I keep a particle here that I move around, you have its paired particle on the East Coast and watch it wiggle. Somehow turning that into bit flips.
It does not. At a practical level, you can't "wiggle" one entangled particle to be in any particular classical state without destroying the entanglement.
At a theoretical level, the no-communication theorem in quantum mechanics forbids it absolutely.
The speed-of-light limit for US coast to coast (Boston to LA) is ~20 ms.
Roundtrip, since you also need to ask for something, is double that.
And that's just the speed of light.
Forget <10 ms latency as long as physics stands, even for Illinois to California.
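Back-of-the-envelope, in case anyone wants to plug in their own cities (distances are rough great-circle figures; the fiber speed is the usual ~2/3 c assumption):

    # Round-trip time floor imposed by the speed of light.
    C_VACUUM_KM_S = 299792
    C_FIBER_KM_S = 200000           # ~2/3 c, typical for optical fiber

    def rtt_ms(distance_km, speed_km_s):
        return 2 * distance_km / speed_km_s * 1000

    for label, km in [("Boston-LA", 4180), ("Chicago-SF", 3000)]:
        print(label, round(rtt_ms(km, C_FIBER_KM_S)), "ms fiber,",
              round(rtt_ms(km, C_VACUUM_KM_S)), "ms vacuum")
    # Boston-LA: ~42 ms fiber, ~28 ms vacuum
    # Chicago-SF: ~30 ms fiber, ~20 ms vacuum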