Message ID | 20211125073733.74902-1-mika.westerberg@linux.intel.com
Series | thunderbolt: Improvements for PM and USB4 compatibility
On Mon, Nov 29, 2021 at 8:30 AM Mika Westerberg
<mika.westerberg@linux.intel.com> wrote:
>
> Hi,
>
> On Fri, Nov 26, 2021 at 09:01:50PM +0100, Lukas Wunner wrote:
> > On Thu, Nov 25, 2021 at 10:37:29AM +0300, Mika Westerberg wrote:
> > > If the boot firmware implements connection manager of its own it may not
> > > create the paths in the same way or order we do. For example it may
> > > create first PCIe tunnel and the USB3 tunnel. When we restore our

the -> then?

> > > tunnels (first de-activating them) we may be doing that over completely
> > > different tunnel and that leaves them possible non-functional. For this

tunnel -> tunnels? possible -> possibly?

> > > reason we re-use the tunnel discovery functionality and find out all the
> > > existing tunnels, and tear them down. Once that is done we can restore
> > > our tunnels.
> >
> > Hm, what if the system is running from a Thunderbolt-attached drive?
> > Will the mount points survive tearing down and re-establishing the
> > tunnels to that drive?
>
> Yes, they should. PCI is waiting for the TBT to resume so it should not
> notice this, and all the data is at that point still synced out to the disks.

But the user will notice the screen flashing, probably?

Maybe we can continue using the already established tunnels after
discovering them?

Is this because the FW might not support the same set of functionality?
Hi,

On Tue, Nov 30, 2021 at 08:25:40PM +0200, Yehezkel Bernat wrote:
> On Mon, Nov 29, 2021 at 8:30 AM Mika Westerberg
> <mika.westerberg@linux.intel.com> wrote:
> >
> > Hi,
> >
> > On Fri, Nov 26, 2021 at 09:01:50PM +0100, Lukas Wunner wrote:
> > > On Thu, Nov 25, 2021 at 10:37:29AM +0300, Mika Westerberg wrote:
> > > > If the boot firmware implements connection manager of its own it may not
> > > > create the paths in the same way or order we do. For example it may
> > > > create first PCIe tunnel and the USB3 tunnel. When we restore our
>
> the -> then?
>
> > > > tunnels (first de-activating them) we may be doing that over completely
> > > > different tunnel and that leaves them possible non-functional. For this
>
> tunnel -> tunnels? possible -> possibly?

Indeed, I'll fix those :)

> > > > reason we re-use the tunnel discovery functionality and find out all the
> > > > existing tunnels, and tear them down. Once that is done we can restore
> > > > our tunnels.
> > >
> > > Hm, what if the system is running from a Thunderbolt-attached drive?
> > > Will the mount points survive tearing down and re-establishing the
> > > tunnels to that drive?
> >
> > Yes, they should. PCI is waiting for the TBT to resume so it should not
> > notice this, and all the data is at that point still synced out to the disks.
>
> But the user will notice the screen flashing, probably?

They will notice flashing anyway because we jump from one kernel to
another (as this is suspend-to-disk, which involves shutting down the
machine and booting to a "fresh" resume kernel first). We actually tear
down all the DP tunnels before we even enter suspend-to-disk (see
81a2e3e49f1f ("thunderbolt: Tear down DP tunnels when suspending")).

> Maybe we can continue using the already established tunnels after
> discovering them?

Yes we could, but that would require us to map the existing tunnels to
the ones we had prior, and also identify any new tunnels or missing
ones. This makes it more complex, and the approach here seems to work
according to my testing :) I can look into that solution too if you think
it is necessary.

> Is this because the FW might not support the same set of functionality?

Yes, that too.
On Wed, Dec 1, 2021 at 8:47 AM Mika Westerberg
<mika.westerberg@linux.intel.com> wrote:
>
> Hi,
>
> > > > > reason we re-use the tunnel discovery functionality and find out all the
> > > > > existing tunnels, and tear them down. Once that is done we can restore
> > > > > our tunnels.
> > > >
> > > > Hm, what if the system is running from a Thunderbolt-attached drive?
> > > > Will the mount points survive tearing down and re-establishing the
> > > > tunnels to that drive?
> > >
> > > Yes, they should. PCI is waiting for the TBT to resume so it should not
> > > notice this, and all the data is at that point still synced out to the disks.
> >
> > But the user will notice the screen flashing, probably?
>
> They will notice flashing anyway because we jump from one kernel to
> another (as this is suspend-to-disk, which involves shutting down the
> machine and booting to a "fresh" resume kernel first). We actually tear
> down all the DP tunnels before we even enter suspend-to-disk (see
> 81a2e3e49f1f ("thunderbolt: Tear down DP tunnels when suspending")).

Ah, thanks.

> > Is this because the FW might not support the same set of functionality?
>
> Yes, that too.

OK

> > Maybe we can continue using the already established tunnels after
> > discovering them?
>
> Yes we could, but that would require us to map the existing tunnels to
> the ones we had prior, and also identify any new tunnels or missing
> ones. This makes it more complex, and the approach here seems to work
> according to my testing :) I can look into that solution too if you think
> it is necessary.

Following the previous points, I don't see any value in trying the more
complex solution.

Thanks!
On Thu, Nov 25, 2021 at 10:37:27AM +0300, Mika Westerberg wrote:
> Hi all,
>
> This series consists of improvements around power management and USB4
> compatibility. We also add debug logging for the DisplayPort resource
> allocation.
>
> Mika Westerberg (6):
>   thunderbolt: Runtime PM activate both ends of the device link
>   thunderbolt: Tear down existing tunnels when resuming from hibernate
>   thunderbolt: Runtime resume USB4 port when retimers are scanned
>   thunderbolt: Do not allow subtracting more NFC credits than configured
>   thunderbolt: Do not program path HopIDs for USB4 routers
>   thunderbolt: Add debug logging of DisplayPort resource allocation

Fixed the typos pointed out by Yehezkel and applied to
thunderbolt.git/next.