Let me clarify: We have a certain amount of latency when streaming games from both local and internet servers. In either case, how do we improve that latency and what limits will we run in to as the technology progresses?

  • MrFunnyMoustache@lemmy.ml
    link
    fedilink
    arrow-up
    1
    ·
    9 months ago

    The lag has several components: input lag between your peripherals and your computer, the network transmission to the server, the regular rendering of the game, live transcoding of the game, the network again, and decoding the stream on your device. The rest are pretty much insignificant.
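
    To get a feel for how those components stack up, here's a toy budget with made-up but plausible numbers (every figure below is an assumption, not a measurement):

```python
# Rough end-to-end latency budget for game streaming, in milliseconds.
# All values are illustrative assumptions, not measurements.
budget = {
    "input_device_to_client": 2.0,   # USB polling + client processing
    "uplink_to_server": 10.0,        # client -> server, one way
    "game_render": 16.7,             # one frame at 60 FPS
    "encode": 5.0,                   # hardware video encode
    "downlink_to_client": 10.0,      # server -> client, one way
    "decode_and_display": 8.0,       # decode + display scan-out
}

total = sum(budget.values())
for stage, ms in budget.items():
    print(f"{stage:>24}: {ms:5.1f} ms")
print(f"{'total':>24}: {total:5.1f} ms")
```

    Even with a nearby server, the render/encode/decode stages alone cost a few frames' worth of time, which is why the network legs aren't the whole story.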

    The biggest way to reduce lag I can think of is if the server is literally in your city, and the connection between it and you has the fewest possible nodes in between. Some video streaming services partner with ISPs to host their servers in the same place to reduce overhead and improve the user experience. I’d assume that gaming would benefit from that too, but this is harder to implement.

    Another way to improve networking lag is by prioritising game streaming data over other traffic. QoS (quality of service) is really important, both on the home network and on the ISP side.
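
    As a concrete sketch of the home-network half: on Linux, an application can at least request priority by setting the DSCP bits on its socket. Whether any router along the path honours the marking is entirely up to each network operator.

```python
import socket

# DSCP EF (Expedited Forwarding, value 46) is the class conventionally
# used for latency-sensitive traffic. DSCP occupies the upper 6 bits of
# the IP TOS byte, hence the shift by 2.
TOS_EF = 46 << 2  # == 184 (0xB8)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184 if the OS accepted it
```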

    This should be obvious, but don’t use a VPN.

    For the video transcoding, it can be pretty quick, but dedicated hardware like NVENC will be faster than using the CPU, not just in terms of FPS but also in latency at the same FPS (via an FPS cap).

    Higher FPS. The more frames per second, the lower the input lag, though it only matters if you eliminate network lag first.
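
    A back-of-the-envelope model of that: an input lands at a random point within a frame, waits on average half a frame before it's sampled, then takes a full frame to render (everything else deliberately ignored):

```python
# Average latency added by frame timing alone (simplified model):
# half a frame of sampling delay plus one full frame of render time.
def frame_latency_ms(fps: float) -> float:
    frame_time = 1000.0 / fps
    return 0.5 * frame_time + frame_time

for fps in (30, 60, 120, 240):
    print(f"{fps:4d} FPS -> ~{frame_latency_ms(fps):.1f} ms")  # 30 FPS -> ~50.0 ms, etc.
```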

    I should mention that I have never used any game streaming service, and I don’t have the equipment to test lag either.

  • Blake [he/him]@feddit.uk
    link
    fedilink
    arrow-up
    0
    ·
    9 months ago

    Theoretically, the latency between the streamer and viewers could be zero or near zero.

    For playing games online, the minimum possible latency is the speed of light delay. We’re pretty much already at the limit for that one, and we’re even using a lot of pretty clever techniques to mitigate latency such as lag compensation.
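
    For a sense of scale, the floor is easy to compute (the distance below is an assumed great-circle figure for London to New York):

```python
# Minimum round-trip time imposed by the speed of light in vacuum.
C_KM_PER_S = 299_792.458

def min_rtt_ms(distance_km: float) -> float:
    return 2 * distance_km / C_KM_PER_S * 1000

print(f"{min_rtt_ms(5570):.1f} ms")  # ~37 ms RTT before any routing, queueing or processing
```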

    • NotAnArdvark@lemmy.ca
      link
      fedilink
      arrow-up
      0
      ·
      9 months ago

      Ooh, we’re not at the speed of light as a limit yet, are we? Do you mean “point A to point B” on fibre, or do you actually mean full on “routed-over-the-internet”? Even with fibre (which is slower than the speed of light), you’re never going in a straight line. And, at least where I live, you’re often back-tracking across the continent before your traffic makes it to the end destination, with ISPs caring more about saving money than routing traffic quickly.

      • Blake [he/him]@feddit.uk
        link
        fedilink
        arrow-up
        0
        ·
        edit-2
        9 months ago

        Even with fibre (which is slower than the speed of light)

        This makes no sense. Are you referring to the speed of light in a vacuum? Fibre transmits data using photons, which travel at the speed of light. While, yes, there is often some slowing of signals depending on whether the fibre is single-mode or multi-mode and whether it has intentionally been doped, it’s close enough to the theoretical maximum speed that it’s not really worth splitting hairs (heh) over.
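
        For anyone who wants the actual factor, it falls straight out of the refractive index (the 1.468 below is an assumed typical value for single-mode glass):

```python
# Light in glass propagates at c / n, where n is the refractive index.
C_KM_PER_S = 299_792.458
N_FIBRE = 1.468  # assumed typical single-mode value

v_fibre = C_KM_PER_S / N_FIBRE
print(f"{v_fibre:,.0f} km/s ({v_fibre / C_KM_PER_S:.0%} of c)")
print(f"one-way delay per 1,000 km of fibre: {1000 / v_fibre * 1000:.2f} ms")
```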

        There are additionally some delays added during signal processing (modulation and demodulation from the carrier up to layer 3), but again, this is already so fast that it’s not conceivably going to get much faster.

        The bottleneck really is contention vs. throughput, rather than the media or modulation/demodulation slash encoding/decoding.

        At least to the best of my knowledge!

        you’re often back-tracking across the continent before your traffic makes it to the end destination, with ISPs caring more about saving money than routing traffic quickly

        That’s generally not how routing works - your packets might take different routes depending on different conditions. Just like how you might take a different road home if you know that there’s roadworks or if the schools are on holiday, it can be genuinely much faster for your packets to take a diversion to avoid, say, a router that’s having a bad day.

        Routing protocols are very advanced and capable, taking many metrics into consideration for how traffic is routed. Under ideal conditions, yes, they’d take the physically shortest route possible, but in most cases, because electricity moves so fast, it’s better to take a route that’s hundreds of miles longer to avoid some router that got hacked and is currently participating in some DDoS attack.
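
        The "longer but cheaper" idea is just shortest-path routing over link costs. A minimal sketch (the topology and latencies are invented; real protocols like OSPF work from link-state databases, and BGP layers policy on top):

```python
import heapq

def shortest_delay(graph, src, dst):
    """Dijkstra over per-link latencies (ms); returns total delay src -> dst."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")

# Hypothetical topology: the direct A-B link is congested (40 ms),
# so the three-hop detour (3 x 5 ms) wins.
net = {"A": {"B": 40.0, "C": 5.0}, "C": {"D": 5.0}, "D": {"B": 5.0}}
print(shortest_delay(net, "A", "B"))  # 15.0
```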

        • Dark Arc@social.packetloss.gg
          link
          fedilink
          English
          arrow-up
          0
          ·
          9 months ago

          That’s generally not how routing works

          It is how it works … mostly because what they’re talking about is the fact that the Internet (at least in the US) is not really set up like a mesh at the ISP level. It’s somewhere between “mesh” and “hub and spoke”, where lots of parties that could talk directly to each other don’t (because nobody ever put down the lines and set up the routing equipment to connect two smaller ISPs or customers directly).

          https://www.smithsonianmag.com/smart-news/first-detailed-public-map-us-internet-infrastructure-180956701/

          • Blake [he/him]@feddit.uk
            link
            fedilink
            arrow-up
            0
            ·
            9 months ago

            There’s absolutely nothing wrong with that topology - the fact that you seem to think that the design is a bad thing really demonstrates your lack of understanding here.

            For example, have you never wondered why we don’t just connect every device in a network together like a big daisy chain? Or why we don’t use a mesh network? There are plenty of reasons why we don’t really use those topologies anymore.

            I don’t want to get into the specifics, but in general, the more networks a router is connected to, the less efficient it is overall.

            The propagation delay is pretty insignificant for most routers. Carrier grade routers like those at the core of the internet can handle up to 43 billion packets per second, another hop is absolutely nothing in terms of delay.
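
            To put numbers on "absolutely nothing": the cost of one extra hop is roughly the serialization delay plus the router's forwarding time, both of which are microseconds at modern link speeds (packet size and link rates below are assumptions):

```python
# Time to clock a full-size 1500-byte packet onto a link.
PACKET_BITS = 1500 * 8

def serialization_us(link_bps: float) -> float:
    return PACKET_BITS / link_bps * 1e6

print(f"10 Gbps:  {serialization_us(10e9):.2f} us")   # 1.20 us
print(f"100 Gbps: {serialization_us(100e9):.3f} us")  # 0.120 us
# Even adding a few microseconds of forwarding, one extra hop is noise
# next to tens of milliseconds of propagation delay.
```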

            • Dark Arc@social.packetloss.gg
              link
              fedilink
              English
              arrow-up
              0
              ·
              edit-2
              9 months ago

              For example, have you never wondered why we don’t just connect every device in a network all together like a big daisy chain? Or why we don’t use a mesh network? There is a large number of reasons why we don’t really use those topologies anymore.

              Well, daisy chaining would be outright insanity … I’m not even sure why you’d jump to something that extreme … my internet connection doesn’t need to depend on the guy down the street.

              Making an optimally dense mesh network (and to be clear, I mean a partially connected mesh topology with more density than the current situation … which at a high level is already a partially connected mesh topology) would not be optimally cost effective … that’s it.

              the more networks a router is connected to, the less efficient it is overall. another hop is absolutely nothing in terms of delay.

              Do you not see how these are contradictory statements?

              Yeah, you’d need more routers, you have more lines. But you could route more directly between various points. e.g., there could be at least one major transmission line between each state and its adjacent states to minimize the distance a packet has to physically travel and increase redundancy. It’s just more expensive and there’s typically not a need.
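
              The gain from a direct line is just geometry over propagation speed. A toy comparison (all distances invented; fibre speed assumed to be roughly c/1.468):

```python
# Two cities 300 km apart whose traffic currently detours through a
# hub 1,200 km from each of them (hypothetical distances).
V_FIBRE_KM_S = 204_000  # ~ c / 1.468 in glass

def one_way_ms(km: float) -> float:
    return km / V_FIBRE_KM_S * 1000

via_hub = one_way_ms(1200 + 1200)
direct = one_way_ms(300)
print(f"via hub: {via_hub:.1f} ms, direct link: {direct:.1f} ms")  # 11.8 vs 1.5
```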

              This stuff happens in more population-dense areas because there’s more data and more people, so direct connections make more sense. It’s just money; it’s not that somehow having fewer lines through the great plains makes the internet faster… Your argument and your attitude are something else. I suspect we’re just talking past each other, but w/e.

              • Blake [he/him]@feddit.uk
                link
                fedilink
                arrow-up
                0
                ·
                9 months ago

                I’m becoming more and more convinced that you don’t really know what you’re talking about. Are you a professional network engineer or are you just a hobbyist?

                • Dark Arc@social.packetloss.gg
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  9 months ago

                  I wear a lot of hats professionally; mostly programming. I don’t do networking on a day-to-day basis though if that’s what you’re asking.

                  If you’ve got something actually substantive to back up your claim that (if money was no object) the current topology is totally optimal for traffic from an arbitrary point A <-> B on that map though… have at it.

                  This all started with:

                  you’re often back-tracking across the continent before your traffic makes it to the end destination, with ISPs caring more about saving money than routing traffic quickly

                  And that’s absolutely true … depending on your location, you will travel an unnecessary distance to get to your destination … because there just aren’t wires connecting A <-> B. Just like a GPS will take you on a non-direct path to your destination because there’s not a road directly to it.

                  A very simple example where the current topology results in routing all the way out to Seattle only to backtrack: https://geotraceroute.com/?node=0&host=umt.edu#
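
                  Rough numbers for that kind of detour, using great-circle distances (coordinates are approximate, and the Chicago origin is just a hypothetical example of traffic heading to umt.edu in Missoula):

```python
from math import asin, cos, radians, sin, sqrt

def km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Approximate coordinates: Chicago IL, Seattle WA, Missoula MT.
chicago, seattle, missoula = (41.88, -87.63), (47.61, -122.33), (46.87, -113.99)

direct = km(*chicago, *missoula)
via_seattle = km(*chicago, *seattle) + km(*seattle, *missoula)
print(f"direct: {direct:.0f} km, via Seattle: {via_seattle:.0f} km")
```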

  • CanadaPlus@lemmy.sdf.org
    link
    fedilink
    arrow-up
    0
    ·
    edit-2
    9 months ago

    The speed of light, so 50ms or so assuming both locations are on Earth. In practice a bit more, because signals go around the planet rather than through the core. Servers already have to make retroactive calls, which is why it sometimes looks like you hit but then you didn’t.

    Interestingly enough, Starlink has lower latency than wire despite the longer path because light travels slower than c through glass fiber.
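
    That trade-off is easy to sketch (path lengths and orbit altitude are rough assumptions; the satellite legs travel through vacuum/air at essentially c):

```python
C_KM_PER_S = 299_792.458
V_FIBRE_KM_S = C_KM_PER_S / 1.468  # assumed refractive index of glass

def one_way_ms(km: float, v_km_s: float) -> float:
    return km / v_km_s * 1000

ground_km = 8000            # assumed terrestrial fibre route
sat_km = 8000 + 2 * 550     # same ground track plus up/down to a ~550 km orbit

fibre_ms = one_way_ms(ground_km, V_FIBRE_KM_S)
sat_ms = one_way_ms(sat_km, C_KM_PER_S)
print(f"fibre: {fibre_ms:.1f} ms, satellite: {sat_ms:.1f} ms")  # ~39 vs ~30
```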

        • Dark Arc@social.packetloss.gg
          link
          fedilink
          English
          arrow-up
          0
          ·
          edit-2
          9 months ago

          But this is the limit of cloud gaming. No cloud gaming design routes traffic all the way around the earth, or even halfway around. Stadia used regional data centers, as do GeForce NOW and Shadow.

          50ms seems really arbitrary.

          • rasensprenger@feddit.de
            link
            fedilink
            arrow-up
            1
            ·
            9 months ago

            I also think 50ms is a bit pessimistic, but there are locations far from Google’s datacenters (at least until they finish their Johannesburg location, South Africa seems very isolated), and you’re never directly connected via as-the-crow-flies fibre runs; the actual path length will be longer than just drawing a line on a map.

            This can all be mitigated by just building more and closer edge servers, of course, but at some point you just have a computer in your room again.

  • Tak@lemmy.ml
    link
    fedilink
    arrow-up
    0
    ·
    9 months ago

    I feel a lot of the responses here are talking about cloud gaming, not game streaming.

    Game streaming needs to be easier to do for it to become more popular. There’s a bunch of half-baked solutions through different hardware and software, when you could just physically move the hardware running the game in most cases.

    Cloud gaming is a hard sell when the cost to play most games on your own hardware is really fucking cheap compared to most media. Like the QWERTY keyboard, people will do the traditional thing because they aren’t forced to change and it’s good enough.

    • andrew@radiation.party
      link
      fedilink
      arrow-up
      0
      ·
      9 months ago

      Steam’s solution is about as simple as it gets. Install Steam on both devices (or the Steam Link app / a physical Steam Link box), pair a controller, log in, hit play.

      • Tak@lemmy.ml
        link
        fedilink
        arrow-up
        0
        ·
        9 months ago

        You’re right, and I guess I’m trying to say that it’s not as simple as: turn on console/computer, then launch game.

        Plus we’re not discussing the intricacies of the game not being on Steam or on consoles. (We could argue it’s easier in some ways too with the Steam Deck.)

        • CleoTheWizard@beehaw.orgOP
          link
          fedilink
          English
          arrow-up
          1
          ·
          9 months ago

          I hear ya. It seems like a large portion of what currently stops game streaming is the internet part of the equation. Even in-home streaming doesn’t like being passed through a router and gets slowed down somewhat by that.