• Empricorn@feddit.nl · 1 year ago

      Naw, USB-A is much more secure. I plug that end into my power bank, throw it in a bag or my pocket, and it disconnects maybe once for every 100 times the USB-C or Lightning end does. It is a little larger, though.

    • cheery_coffee@lemmy.ca · 1 year ago

      I hate USB-C because until now the standard didn’t require any markings and the standards themselves are hot garbage.

      Go ahead, pull out a USB-C cable from your drawer and tell me what it does. I bet you instantly know which cable is VGA, HDMI, DisplayPort, FireWire, or serial, but you’ll never know for sure what your USB-C cable supports.

      You got reversibility but at what price?

    • TWeaK@lemm.ee · 1 year ago

      I just wish they didn’t come with chips inside our cables.

      • Echo Dot@feddit.uk · 1 year ago

        You need that for power regulation. One of the reasons you can use a USB-C lead with anything is that devices with different power requirements just tell the cable what they need, and the chip inside the cable deals with it. Otherwise there would have to be different cables for different voltage requirements.

        • hcbxzz@lemmy.world · 1 year ago

          You can do cable detection with just a few resistors. Why make everyone use active cables just for basic functionality? Aside from exceptionally rare circumstances, consumer-grade cables should be passive devices IMO.
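          The resistor scheme this refers to can be sketched in a few lines. The Rp/Rd values below are the ones defined in the USB Type-C spec (Rp pull-up in the source advertising its current, Rd = 5.1 kΩ pull-down in the sink); the 5 V pull-up rail and the exact detection thresholds are simplifying assumptions for illustration.

```python
# Sketch: how a USB-C sink can infer the source's current advertisement
# from nothing but resistors on the CC line (no IC in the cable).
# Rp/Rd values are from the USB Type-C spec; the 5 V rail and the
# threshold voltages below are illustrative simplifications.

VBUS_PULLUP = 5.0   # CC pull-up rail in the source, volts (assumption)
RD = 5.1e3          # sink pull-down on CC, ohms (spec value)

# The source advertises its current capability by its choice of Rp:
RP_OPTIONS = {
    56e3: "Default USB (500/900 mA)",
    22e3: "1.5 A",
    10e3: "3.0 A",
}

def cc_voltage(rp: float) -> float:
    """Voltage the sink sees on CC for a given source pull-up Rp."""
    return VBUS_PULLUP * RD / (RD + rp)

def classify(v_cc: float) -> str:
    """Map a measured CC voltage to an advertisement (illustrative thresholds)."""
    if v_cc > 1.23:
        return "3.0 A"
    if v_cc > 0.66:
        return "1.5 A"
    if v_cc > 0.2:
        return "Default USB (500/900 mA)"
    return "no source attached"

for rp, label in RP_OPTIONS.items():
    v = cc_voltage(rp)
    print(f"Rp={rp/1e3:.0f}k -> CC={v:.2f} V -> detected: {classify(v)}")
```

          Note the whole scheme is a passive voltage divider: the sink only has to measure one voltage to know what the source can deliver.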

          • anotherandrew@lemmy.mixdown.ca · 1 year ago

            They don’t use cable ICs for basic power use. The IC in the cable (different ICs for different capabilities) is used for high-power negotiation (i.e. the cheap thin cable won’t be able to do 100 W, and the lack of a chip enforces this safety requirement) and also for active equalization so you can get 40 Gbps.

            It’s a good thing, and cheap cables don’t need it at all. The system falls back safely.

            • hcbxzz@lemmy.world · 1 year ago

              Pull-up resistors have solved the same problem much more simply for decades. Even with ICs, manufacturers can still make weak cables that lie about their capacity and then burst into flames. The IC is not what makes the cable safe, the manufacturer is. And if all else fails, the host can still directly measure cable resistance with some help from the client.

              • anotherandrew@lemmy.mixdown.ca · 1 year ago

                I mentioned this in another post, but yes, resistor dividers are useful and have been used for ages. However, things like component aging/damage, and simply having enough headroom between the different options, limit the number of discrete states you can convey with a resistor divider.

                I’m usually not a fan of overcomplicated solutions, but these identity chips aren’t that.
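                The headroom point can be put into rough numbers: a divider conveys information as a voltage band, and every band needs guard space for tolerance, aging and measurement error. The percentages below are illustrative assumptions, not spec values.

```python
# Rough arithmetic behind the "limited discrete states" point: each
# state occupies a voltage band widened by component tolerance and ADC
# error, plus a guard band to its neighbours. Numbers are illustrative
# assumptions, not values from any spec.

V_FULL_SCALE = 5.0      # measurement range, volts
RESISTOR_TOL = 0.05     # +/-5% resistors (worse after aging/damage)
ADC_ERROR = 0.02        # +/-2% of full scale measurement error

# Worst-case uncertainty on any one level, as a fraction of full scale:
uncertainty = RESISTOR_TOL + ADC_ERROR           # ~ +/-7%
band_width = 2 * uncertainty * V_FULL_SCALE      # volts one state may span

# Leave the same width again as guard band between adjacent states:
states = int(V_FULL_SCALE // (2 * band_width))
print(f"each state spans ~{band_width:.2f} V, "
      f"so roughly {states} reliably distinguishable states")
```

                Under these assumptions you only get a handful of robust states, which is in line with Type-C using just three current advertisements on the CC line; anything richer needs a digital protocol.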

        • TWeaK@lemm.ee · 1 year ago

          You don’t need it though. The power regulation is a decision between the load and the supply devices, the cable is an unnecessary third party. The cable should just be a multicore connection between two things, not a third device.

          If I had to go out on a limb though, I’d say it’s because manufacturers were selling cheap cables that didn’t meet the specification, and people were using them with higher power devices, causing overheating. By including a chip in the spec for the cable, you can push some of the responsibility back towards the cable manufacturer, and they can limit the maximum current to whatever they’ve designed to. In which case, we already do have different cables for different voltages - if your cable isn’t rated for 100W, then it might force a lower power even if your device and charger can do 100W. However it would be better if cable manufacturers would just meet the basic design specification to begin with, rather than creating unnecessary overhead.
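          The fallback described here reduces to taking the minimum of what all three parties permit. A minimal sketch, assuming the USB PD convention that a cable without an e-marker is treated as good for 3 A (60 W at 20 V); the specific wattages are illustrative.

```python
# Sketch of the fallback behaviour: the usable power level is capped by
# source, sink AND cable. A cable with no e-marker is assumed good for
# 60 W (3 A) per the USB PD convention; higher power needs a marked
# cable. Wattages below are illustrative.

def negotiated_power(source_w: float, sink_w: float,
                     cable_w: float = 60.0) -> float:
    """Highest power all three parties permit (default: unmarked cable)."""
    return min(source_w, sink_w, cable_w)

print(negotiated_power(100, 100, 100))  # e-marked 100 W cable
print(negotiated_power(100, 100))       # unmarked cable: capped at 60 W
print(negotiated_power(18, 100, 100))   # weak charger is the limit
```

          So a device and charger that both support 100 W still fall back to a lower level through an unmarked or lower-rated cable, which is exactly the "push responsibility to the cable manufacturer" effect described above.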

          • Echo Dot@feddit.uk · 1 year ago

            It doesn’t make any difference whether it’s between the supply and the device or between the cable and the device; it’s still two devices.

            By pushing the responsibility onto the cable it allows you to operate the cable directly from a USB port. So you can have things like electrical sockets with USB connections and you don’t have to have chips in the sockets, because typically they’re just dumb electrical interfaces. It also means that the device delivering the power doesn’t have to be actually fully switched on, so you can recharge your phone from a USB port on your computer and you don’t have to power the computer on. As long as there is an open electrical channel to the port the cable will deal with it all itself.

            Also it’s more efficient: otherwise you would have to have a control circuit in every single power delivery device, but this way you can have it in just the one cable, so it’s one chip for an unlimited number of power delivery devices.

            • TWeaK@lemm.ee · 1 year ago

              So you can have things like electrical sockets with USB connections and you don’t have to have chips in the sockets, because typically they’re just dumb electrical interfaces.

              If the supply is dumb and cannot negotiate power, then there is no need to negotiate power and it will fall back on regular 5V USB. The same if the load is dumb. In this case, there is no need for a cable chip.

              It also means that the device delivering the power doesn’t have to be actually fully switched on, so you can recharge your phone from a USB port on your computer and you don’t have to power the computer on.

              If the USB port has power to it, the computer is supplying it. The voltage would be on but open circuit. The computer would not have to supply the negotiation circuitry until a cable has been connected end to end and the voltage circuit is closed.

              You’re trying to present this as the cable replacing one of the devices, but it doesn’t, it’s an extra 3rd device in the negotiation. All 3 devices must permit a certain charging level for that level to be used. It may have some benefit in ensuring that cable load capacity isn’t exceeded, but like I say it would be far better if the cables were reliably manufactured properly to handle the specified loads.

          • anotherandrew@lemmy.mixdown.ca · 1 year ago

            The cable has to carry the negotiated power safely. It’s not unnecessary, it’s absolutely critical. I’ve personally seen and diagnosed the result of when this fails.

            For your low power applications there is no need and the spec allows for that.

            • TWeaK@lemm.ee · 1 year ago

              It wouldn’t be critical if the cables were suitably rated for the specification. If you put a 0.5A cable in a 3A circuit, you’re gonna have a bad time. If you use a 3A or better cable, then you don’t need a cable chip to tell the actual devices to only work at 0.5A.

              • anotherandrew@lemmy.mixdown.ca · 1 year ago

                How do you have the cable correctly identify itself if you don’t put some smarts in it? Or are you saying we should only be able to buy expensive cables fully rated for 100 W (or higher, as the spec has been updated)? And how do you prevent an older cable rated for 100 W from being abused in a newer 200 W circuit?

                Divider resistors are okay, but the IC is a better choice for future proofing and reliability.

      • Bobby Turkalino@lemmy.yachts · 1 year ago

        A chip can literally just contain basic logic gates. Your aversion to them is based on pure QAnon fiction.

        • TWeaK@lemm.ee · 1 year ago

          My aversion to them is an aversion to unnecessary overhead. A cable is a cable, it shouldn’t be a third device.

        • anotherandrew@lemmy.mixdown.ca · 1 year ago

          No, the chip is a microcontroller with firmware. You can try to do it in pure logic, but it’s a waste of effort and development resources.