Message ID: 150fc7ab1c7f9b70a95dae1f4bc3b9018c0f9e04.1623981933.git.saiprakash.ranjan@codeaurora.org
State: New
Series: iommu/io-pgtable: Optimize partial walk flush for large scatter-gather list
Hi,

On Thu, Jun 17, 2021 at 7:51 PM Sai Prakash Ranjan
<saiprakash.ranjan@codeaurora.org> wrote:
>
> Currently for iommu_unmap() of large scatter-gather list with page size
> elements, the majority of time is spent in flushing of partial walks in
> __arm_lpae_unmap() which is a VA based TLB invalidation invalidating
> page-by-page on iommus like arm-smmu-v2 (TLBIVA) which do not support
> range based invalidations like on arm-smmu-v3.2.
>
> For example: to unmap a 32MB scatter-gather list with page size elements
> (8192 entries), there are 16->2MB buffer unmaps based on the pgsize (2MB
> for 4K granule) and each of 2MB will further result in 512 TLBIVAs (2MB/4K)
> resulting in a total of 8192 TLBIVAs (512*16) for 16->2MB causing a huge
> overhead.
>
> So instead use tlb_flush_all() callback (TLBIALL/TLBIASID) to invalidate
> the entire context for partial walk flush on select few platforms where
> cost of over-invalidation is less than unmap latency

It would probably be worth punching this description up a little bit.
Elsewhere you said in more detail why this over-invalidation is less
of a big deal for the Qualcomm SMMU. It's probably worth saying
something like that here, too. Like this bit paraphrased from your
other email:

On qcom impl, we have several performance improvements for TLB cache
invalidations in HW like wait-for-safe (for realtime clients such as
camera and display) and few others to allow for cache lookups/updates
when TLBI is in progress for the same context bank.

> using the newly
> introduced quirk IO_PGTABLE_QUIRK_TLB_INV_ALL. We also do this for
> non-strict mode given its all about over-invalidation saving time on
> individual unmaps and non-deterministic generally.

As per usual I'm mostly clueless, but I don't quite understand why you
want this new behavior for non-strict mode. To me it almost seems like
the opposite? Specifically, non-strict mode is already outside the
critical path today and so there's no need to optimize it. I'm
probably not explaining myself clearly, but I guess I'm thinking:

a) today for strict, unmap is in the critical path and it's important
to get it out of there. Getting it out of the critical path is so
important that we're willing to over-invalidate to speed up the
critical path.

b) today for non-strict, unmap is not in the critical path.

So I would almost expect your patch to _disable_ your new feature for
non-strict mappings, not auto-enable your new feature for non-strict
mappings.

If I'm babbling, feel free to ignore. ;-) Looking back, I guess Robin
was the one that suggested the behavior you're implementing, so it's
more likely he's right than I am. ;-)

-Doug
Hi,

On 2021-06-19 03:39, Doug Anderson wrote:
> Hi,
>
> On Thu, Jun 17, 2021 at 7:51 PM Sai Prakash Ranjan
> <saiprakash.ranjan@codeaurora.org> wrote:
>>
>> Currently for iommu_unmap() of large scatter-gather list with page size
>> elements, the majority of time is spent in flushing of partial walks in
>> __arm_lpae_unmap() which is a VA based TLB invalidation invalidating
>> page-by-page on iommus like arm-smmu-v2 (TLBIVA) which do not support
>> range based invalidations like on arm-smmu-v3.2.
>>
>> For example: to unmap a 32MB scatter-gather list with page size elements
>> (8192 entries), there are 16->2MB buffer unmaps based on the pgsize (2MB
>> for 4K granule) and each of 2MB will further result in 512 TLBIVAs (2MB/4K)
>> resulting in a total of 8192 TLBIVAs (512*16) for 16->2MB causing a huge
>> overhead.
>>
>> So instead use tlb_flush_all() callback (TLBIALL/TLBIASID) to invalidate
>> the entire context for partial walk flush on select few platforms where
>> cost of over-invalidation is less than unmap latency
>
> It would probably be worth punching this description up a little bit.
> Elsewhere you said in more detail why this over-invalidation is less
> of a big deal for the Qualcomm SMMU. It's probably worth saying
> something like that here, too. Like this bit paraphrased from your
> other email:
>
> On qcom impl, we have several performance improvements for TLB cache
> invalidations in HW like wait-for-safe (for realtime clients such as
> camera and display) and few others to allow for cache lookups/updates
> when TLBI is in progress for the same context bank.
>

Sure, will add this info as well in the next version.

>
>> using the newly
>> introduced quirk IO_PGTABLE_QUIRK_TLB_INV_ALL. We also do this for
>> non-strict mode given its all about over-invalidation saving time on
>> individual unmaps and non-deterministic generally.
>
> As per usual I'm mostly clueless, but I don't quite understand why you
> want this new behavior for non-strict mode. To me it almost seems like
> the opposite? Specifically, non-strict mode is already outside the
> critical path today and so there's no need to optimize it. I'm
> probably not explaining myself clearly, but I guess I'm thinking:
>
> a) today for strict, unmap is in the critical path and it's important
> to get it out of there. Getting it out of the critical path is so
> important that we're willing to over-invalidate to speed up the
> critical path.
>
> b) today for non-strict, unmap is not in the critical path.
>
> So I would almost expect your patch to _disable_ your new feature for
> non-strict mappings, not auto-enable your new feature for non-strict
> mappings.
>
> If I'm babbling, feel free to ignore. ;-) Looking back, I guess Robin
> was the one that suggested the behavior you're implementing, so it's
> more likely he's right than I am. ;-)
>

Thanks for taking a look. Non-strict mode only applies to leaf entries and
DMA domains, while this optimization is for non-leaf entries and applies to
both modes, see __arm_lpae_unmap(). In other words, if you have
iommu.strict=0 (non-strict mode) and try unmapping a large sg buffer as
described in the commit text, you would still go via this path in unmap and
see the delay without this patch. So what Robin suggested is that we do this
unconditionally for all users in non-strict mode, as opposed to restricting
it to implementation-specific quirks only in the strict case.

Thanks,
Sai

--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation
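For readers skimming the thread, here is a heavily paraphrased sketch of the
branch in __arm_lpae_unmap() that Sai is referring to. Only
io_pgtable_tlb_flush_walk(), io_pgtable_tlb_add_page() and
IO_PGTABLE_QUIRK_NON_STRICT are real identifiers; block_size, leaf_entry(),
clear_pte() and free_child_table() are stand-ins, so treat this as an
illustration rather than the actual kernel code:

/*
 * Sketch only: when an unmap covers a whole table (non-leaf) entry, the
 * partial-walk flush is issued synchronously regardless of strict vs
 * non-strict mode; only leaf entries take the strict/non-strict fork.
 */
if (size == block_size) {
	clear_pte(ptep);

	if (!leaf_entry(pte)) {
		/* Table entry: flush partial walks before freeing the child
		 * table. This is the call the proposed quirk would turn into
		 * a full-context tlb_flush_all() (TLBIALL/TLBIASID). */
		io_pgtable_tlb_flush_walk(iop, iova, size, granule);
		free_child_table(pte);
	} else if (iop->cfg.quirks & IO_PGTABLE_QUIRK_NON_STRICT) {
		/* Leaf entry, non-strict: invalidation is deferred to the
		 * flush queue, so nothing is issued here. */
	} else {
		/* Leaf entry, strict: gather the page for a later TLB sync. */
		io_pgtable_tlb_add_page(iop, gather, iova, size);
	}
	return size;
}

This is why iommu.strict=0 on its own does not avoid the TLBIVA storm for
large scatter-gather unmaps: the expensive flush sits on the table-entry
path, not the leaf path.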
On 2021-06-21 06:47, Sai Prakash Ranjan wrote:
> Hi,
>
> On 2021-06-19 03:39, Doug Anderson wrote:
>> Hi,
>>
>> On Thu, Jun 17, 2021 at 7:51 PM Sai Prakash Ranjan
>> <saiprakash.ranjan@codeaurora.org> wrote:
>>>
>>> Currently for iommu_unmap() of large scatter-gather list with page size
>>> elements, the majority of time is spent in flushing of partial walks in
>>> __arm_lpae_unmap() which is a VA based TLB invalidation invalidating
>>> page-by-page on iommus like arm-smmu-v2 (TLBIVA) which do not support
>>> range based invalidations like on arm-smmu-v3.2.
>>>
>>> For example: to unmap a 32MB scatter-gather list with page size elements
>>> (8192 entries), there are 16->2MB buffer unmaps based on the pgsize (2MB
>>> for 4K granule) and each of 2MB will further result in 512 TLBIVAs (2MB/4K)
>>> resulting in a total of 8192 TLBIVAs (512*16) for 16->2MB causing a huge
>>> overhead.
>>>
>>> So instead use tlb_flush_all() callback (TLBIALL/TLBIASID) to invalidate
>>> the entire context for partial walk flush on select few platforms where
>>> cost of over-invalidation is less than unmap latency
>>
>> It would probably be worth punching this description up a little bit.
>> Elsewhere you said in more detail why this over-invalidation is less
>> of a big deal for the Qualcomm SMMU. It's probably worth saying
>> something like that here, too. Like this bit paraphrased from your
>> other email:
>>
>> On qcom impl, we have several performance improvements for TLB cache
>> invalidations in HW like wait-for-safe (for realtime clients such as
>> camera and display) and few others to allow for cache lookups/updates
>> when TLBI is in progress for the same context bank.
>>
>
> Sure, will add this info as well in the next version.
>
>>
>>> using the newly
>>> introduced quirk IO_PGTABLE_QUIRK_TLB_INV_ALL. We also do this for
>>> non-strict mode given its all about over-invalidation saving time on
>>> individual unmaps and non-deterministic generally.
>>
>> As per usual I'm mostly clueless, but I don't quite understand why you
>> want this new behavior for non-strict mode. To me it almost seems like
>> the opposite? Specifically, non-strict mode is already outside the
>> critical path today and so there's no need to optimize it. I'm
>> probably not explaining myself clearly, but I guess I'm thinking:
>>
>> a) today for strict, unmap is in the critical path and it's important
>> to get it out of there. Getting it out of the critical path is so
>> important that we're willing to over-invalidate to speed up the
>> critical path.
>>
>> b) today for non-strict, unmap is not in the critical path.
>>
>> So I would almost expect your patch to _disable_ your new feature for
>> non-strict mappings, not auto-enable your new feature for non-strict
>> mappings.
>>
>> If I'm babbling, feel free to ignore. ;-) Looking back, I guess Robin
>> was the one that suggested the behavior you're implementing, so it's
>> more likely he's right than I am. ;-)
>>
>
> Thanks for taking a look. Non-strict mode only applies to leaf entries and
> DMA domains, while this optimization is for non-leaf entries and applies to
> both modes, see __arm_lpae_unmap(). In other words, if you have
> iommu.strict=0 (non-strict mode) and try unmapping a large sg buffer as
> described in the commit text, you would still go via this path in unmap and
> see the delay without this patch. So what Robin suggested is that we do
> this unconditionally for all users in non-strict mode, as opposed to
> restricting it to implementation-specific quirks only in the strict case.

Right, unmapping tables works out as a bit of a compromise for non-strict
mode - we don't use a freelist to defer the freeing of pagetable pages, so
we rely on issuing non-leaf invalidations synchronously to knock out walk
caches which may be pointing to the page before we free it. We might
actually be able to get away without that for non-strict unmaps, since
partial walks pointing at freed memory probably aren't too much more
hazardous than the equivalent leaf TLB entries while the IOVA region is
held in the flush queue, but it certainly does matter for maps when we're
knocking out a (presumably empty) table entry to put down a new block whose
IOVA will be immediately live.

Robin.
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index 45441592a0e6..fd6b30cfdbf7 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -219,6 +219,12 @@ static inline void io_pgtable_tlb_flush_walk(struct io_pgtable *iop,
 					     unsigned long iova, size_t size,
 					     size_t granule)
 {
+	if (iop->cfg.quirks & IO_PGTABLE_QUIRK_NON_STRICT ||
+	    iop->cfg.quirks & IO_PGTABLE_QUIRK_TLB_INV_ALL) {
+		iop->cfg.tlb->tlb_flush_all(iop->cookie);
+		return;
+	}
+
 	if (iop->cfg.tlb && iop->cfg.tlb->tlb_flush_walk)
 		iop->cfg.tlb->tlb_flush_walk(iova, size, granule, iop->cookie);
 }
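To make the new quirk concrete, below is a minimal sketch of how an SMMU
implementation that prefers full-context invalidation could opt in when
building its pagetable configuration. The io_pgtable_cfg fields,
alloc_io_pgtable_ops() and ARM_64_LPAE_S1 are existing interfaces; the
numeric values, my_tlb_flush_ops and my_cookie are illustrative assumptions,
not taken from this series:

#include <linux/io-pgtable.h>
#include <linux/sizes.h>

/* Illustrative only: an implementation where TLBIALL/TLBIASID is known to be
 * cheaper than thousands of per-page TLBIVAs sets IO_PGTABLE_QUIRK_TLB_INV_ALL
 * and must provide a tlb_flush_all() callback in its iommu_flush_ops. */
struct io_pgtable_cfg pgtbl_cfg = {
	.pgsize_bitmap	= SZ_4K | SZ_2M | SZ_1G,
	.ias		= 32,
	.oas		= 32,
	.coherent_walk	= true,
	.tlb		= &my_tlb_flush_ops,
	.quirks		= IO_PGTABLE_QUIRK_TLB_INV_ALL,
};

struct io_pgtable_ops *ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1, &pgtbl_cfg,
						  my_cookie);

With that set, each partial-walk flush taken in the hunk above becomes a
single tlb_flush_all() on the context, which is where the unmap numbers in
the commit message below come from.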
Currently for iommu_unmap() of a large scatter-gather list with page size
elements, the majority of the time is spent flushing partial walks in
__arm_lpae_unmap(), which is a VA-based TLB invalidation that invalidates
page-by-page on IOMMUs like arm-smmu-v2 (TLBIVA) which do not support
range-based invalidations like arm-smmu-v3.2.

For example, to unmap a 32MB scatter-gather list with page size elements
(8192 entries), there are 16 2MB buffer unmaps based on the pgsize (2MB for
a 4K granule), and each 2MB unmap further results in 512 TLBIVAs (2MB/4K),
for a total of 8192 TLBIVAs (512*16) for the 16 2MB chunks, causing a huge
overhead.

So instead use the tlb_flush_all() callback (TLBIALL/TLBIASID) to invalidate
the entire context for the partial walk flush on the select few platforms
where the cost of over-invalidation is less than the unmap latency, using
the newly introduced quirk IO_PGTABLE_QUIRK_TLB_INV_ALL. We also do this for
non-strict mode, since that mode is already all about trading
over-invalidation for time saved on individual unmaps and is generally
non-deterministic anyway.

For this example of a 32MB scatter-gather list unmap, this results in just
16 ASID-based TLB invalidations (TLBIASIDs) as opposed to 8192 TLBIVAs,
thereby improving unmap performance drastically.

Test on QTI SM8150 SoC, average over 10 iterations of iommu_map_sg/unmap:

Before this optimization:

    size        iommu_map_sg      iommu_unmap
      4K            2.067 us         1.854 us
     64K            9.598 us         8.802 us
      1M          148.890 us       130.718 us
      2M          305.864 us        67.291 us
     12M         1793.604 us       390.838 us
     16M         2386.848 us       518.187 us
     24M         3563.296 us       775.989 us
     32M         4747.171 us      1033.364 us

After this optimization:

    size        iommu_map_sg      iommu_unmap
      4K            1.723 us         1.765 us
     64K            9.880 us         8.869 us
      1M          155.364 us       135.223 us
      2M          303.906 us         5.385 us
     12M         1786.557 us        21.250 us
     16M         2391.890 us        27.437 us
     24M         3570.895 us        39.937 us
     32M         4755.234 us        51.797 us

This is reduced further once map/unmap_pages() support lands, which will
result in just 1 TLBIASID as compared to 16 TLBIASIDs.

Real-world data also shows a big difference in unmap performance: there were
reports of camera frame drops because of the high iommu unmap overhead
without this optimization; frequent unmaps of about 100MB/s issued by the
camera were taking more than 100ms, causing the frame drops.

Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
---
 include/linux/io-pgtable.h | 6 ++++++
 1 file changed, 6 insertions(+)
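As a quick sanity check on the arithmetic quoted above, the invalidation
counts can be reproduced with a few lines of standalone userspace C (not
kernel code, just the 4K-granule numbers from the commit message):

#include <stdio.h>

int main(void)
{
	unsigned long total   = 32UL << 20;	/* 32MB scatter-gather list */
	unsigned long pgsize  = 2UL << 20;	/* 2MB unmap chunks (4K granule) */
	unsigned long granule = 4UL << 10;	/* 4K pages */

	unsigned long walks   = total / pgsize;		    /* 16 partial-walk flushes */
	unsigned long tlbivas = walks * (pgsize / granule); /* 16 * 512 = 8192 */

	printf("VA-based invalidation: %lu TLBIVAs\n", tlbivas);
	printf("With the quirk:        %lu TLBIASIDs\n", walks);
	return 0;
}

This matches the 8192 TLBIVAs versus 16 TLBIASIDs figures above; with
map/unmap_pages() support the whole 32MB range would be handled in a single
call and hence a single TLBIASID.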