Message ID: 1654507822-168026-1-git-send-email-john.garry@huawei.com
Series: DMA mapping changes for SCSI core
On 6/6/22 02:30, John Garry wrote:
> As reported in [0], DMA mappings whose size exceeds the IOMMU IOVA caching
> limit may see a big performance hit.
>
> This series introduces a new DMA mapping API, dma_opt_mapping_size(), so
> that drivers may know this limit when performance is a factor in the
> mapping.
>
> Robin didn't like using dma_max_mapping_size() for this [1].
>
> The SCSI core code is modified to use this limit.
>
> I also added a patch for libata-scsi as it does not currently honour the
> shost max_sectors limit.
>
> Note: Christoph has previously kindly offered to take this series via the
> dma-mapping tree, so I think that we just need an ack from the
> IOMMU guys now.
>
> [0] https://lore.kernel.org/linux-iommu/20210129092120.1482-1-thunder.leizhen@huawei.com/
> [1] https://lore.kernel.org/linux-iommu/f5b78c9c-312e-70ab-ecbb-f14623a4b6e3@arm.com/

Regarding [0], that patch reverts commit 4e89dce72521 ("iommu/iova: Retry
from last rb tree node if iova search fails"). Reading the description of
that patch, it seems to me that the iova allocator can be improved.
Shouldn't the iova allocator be improved such that we don't need this
patch series? There are algorithms that handle fragmentation much better
than the current iova allocator algorithm, e.g. the
https://en.wikipedia.org/wiki/Buddy_memory_allocation algorithm.

Thanks,

Bart.
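For readers unfamiliar with the buddy technique Bart points to: blocks
are sized in powers of two, and a freed block can coalesce with its
"buddy", found by flipping a single address bit, which keeps large
contiguous ranges available and limits fragmentation. A toy sketch of
that core calculation (all names here are hypothetical illustration, not
kernel API):

```c
#include <stdio.h>

/* Smallest block is 2^MIN_ORDER allocation units (e.g. IOVA pages). */
#define MIN_ORDER 0

/* Round an allocation size up to a power-of-two order. */
static unsigned int size_to_order(unsigned long size)
{
	unsigned int order = MIN_ORDER;

	while ((1UL << order) < size)
		order++;
	return order;
}

/*
 * A block aligned to its own size has a buddy whose offset differs in
 * exactly one bit, so finding the merge partner on free is a single
 * XOR and a full free/merge pass is O(log n).
 */
static unsigned long buddy_of(unsigned long offset, unsigned int order)
{
	return offset ^ (1UL << order);
}

int main(void)
{
	unsigned int order = size_to_order(5);	/* rounds up to 8 units */

	printf("order %u, block at 0x10 pairs with 0x%lx\n",
	       order, buddy_of(0x10, order));	/* prints 0x18 */
	return 0;
}
```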
On 07/06/2022 23:43, Bart Van Assche wrote:
> On 6/6/22 02:30, John Garry wrote:
>> As reported in [0], DMA mappings whose size exceeds the IOMMU IOVA
>> caching limit may see a big performance hit.
>>
>> This series introduces a new DMA mapping API, dma_opt_mapping_size(), so
>> that drivers may know this limit when performance is a factor in the
>> mapping.
>>
>> Robin didn't like using dma_max_mapping_size() for this [1].
>>
>> The SCSI core code is modified to use this limit.
>>
>> I also added a patch for libata-scsi as it does not currently honour the
>> shost max_sectors limit.
>>
>> Note: Christoph has previously kindly offered to take this series via
>> the dma-mapping tree, so I think that we just need an ack from the
>> IOMMU guys now.
>>
>> [0] https://lore.kernel.org/linux-iommu/20210129092120.1482-1-thunder.leizhen@huawei.com/
>> [1] https://lore.kernel.org/linux-iommu/f5b78c9c-312e-70ab-ecbb-f14623a4b6e3@arm.com/
>
> Regarding [0], that patch reverts commit 4e89dce72521 ("iommu/iova:
> Retry from last rb tree node if iova search fails"). Reading the
> description of that patch, it seems to me that the iova allocator can
> be improved. Shouldn't the iova allocator be improved such that we
> don't need this patch series? There are algorithms that handle
> fragmentation much better than the current iova allocator algorithm,
> e.g. the https://en.wikipedia.org/wiki/Buddy_memory_allocation
> algorithm.

Regardless of whether the IOVA allocator can be improved - which it
probably can be - this series is still useful. That is due to the IOVA
rcache, a cache of pre-allocated IOVAs which can be used quickly in the
DMA mapping. The rcache contains IOVAs up to a certain fixed size. In
this series we limit the DMA mapping length to the rcache size upper
limit so that we always bypass the allocator (when a cached IOVA is
available) - see alloc_iova_fast().

Even if the IOVA allocator were greatly optimised for speed, there would
still be an overhead in the alloc and free for those larger IOVAs which
would outweigh the advantage of having larger DMA mappings.

But is there even an advantage in very large streaming DMA mappings?
Maybe for IOTLB efficiency. But some say it's better to have the DMA
engine start processing the data ASAP and not wait for larger lists to
be built.

Thanks,
John
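To make the bypass concrete, the SCSI core change in the series has
roughly the following shape - a hedged sketch of the idea, not the exact
hunk, and the helper name here is hypothetical. The host's max_sectors
is clamped so that a single request never maps more DMA than the size
returned by dma_opt_mapping_size(), the limit the rcache can serve:

```c
#include <linux/blkdev.h>	/* SECTOR_SHIFT */
#include <linux/dma-mapping.h>	/* dma_opt_mapping_size() */
#include <linux/minmax.h>	/* min_t() */
#include <scsi/scsi_host.h>

/* Hypothetical helper; the real change lives in the host-add path. */
static void cap_shost_sectors_for_iova_rcache(struct Scsi_Host *shost,
					      struct device *dma_dev)
{
	/* Devices with no DMA mask do no DMA mapping; nothing to cap. */
	if (!dma_dev->dma_mask)
		return;

	/* Keep each mapping within the rcache-served size so that
	 * alloc_iova_fast() stays on its cached, allocator-bypassing path.
	 */
	shost->max_sectors = min_t(unsigned int, shost->max_sectors,
				   dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);
}
```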