Message ID: 20231221153948.119007-1-yi.l.liu@intel.com
Series: Add iommufd nesting (part 2/2)
> From: Liu, Yi L <yi.l.liu@intel.com>
> Sent: Thursday, December 21, 2023 11:40 PM
>
> From: Lu Baolu <baolu.lu@linux.intel.com>
>
> The updates of the PTEs in the nested page table will be propagated to the
> hardware caches on both IOMMU (IOTLB) and devices (DevTLB/ATC).

this is incorrect. the scope of this cmd is driver specific.

>
> Add a new domain op cache_invalidate_user for the userspace to flush the
> hardware caches for a nested domain through iommufd. No wrapper for it,
> as it's only supposed to be used by iommufd. Then, pass in invalidation
> requests in form of a user data array conatining a number of invalidation
> data entries.
>
> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
> ---
>  include/linux/iommu.h | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
>
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 6291aa7b079b..5c4a17f13761 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -284,6 +284,24 @@ struct iommu_user_data {
>  	size_t len;
>  };
>
> +/**
> + * struct iommu_user_data_array - iommu driver specific user space data array
> + * @type: The data type of all the entries in the user buffer array
> + * @uptr: Pointer to the user buffer array for copy_from_user()

remove 'for copy_from_user()'

> + * @entry_len: The fixed-width length of a entry in the array, in bytes

s/a/an/

> + * @entry_num: The number of total entries in the array
> + *
> + * A array having a @entry_num number of @entry_len sized entries, each entry is

the first sentence is redundant.

> + * user space data, an uAPI defined in include/uapi/linux/iommufd.h where @type
> + * is also defined as enum iommu_xyz_data_type.

I'd just say:

"The user buffer includes an array of requests with format defined
in include/uapi/linux/iommufd.h"
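To make the kernel-doc discussion concrete, here is a minimal sketch of the additions under review, with the comment reworded along the lines suggested above. Field names come from the quoted hunk; the field types and the op signature are inferred from the Intel driver patch later in this thread, so treat them as illustrative rather than the final interface.

	/**
	 * struct iommu_user_data_array - iommu driver specific user space data array
	 * @type: The data type of all the entries in the user buffer array
	 * @uptr: Pointer to the user buffer array
	 * @entry_len: The fixed-width length of an entry in the array, in bytes
	 * @entry_num: The number of total entries in the array
	 *
	 * The user buffer includes an array of requests with format defined
	 * in include/uapi/linux/iommufd.h
	 */
	struct iommu_user_data_array {
		unsigned int type;
		void __user *uptr;
		size_t entry_len;
		u32 entry_num;
	};

	struct iommu_domain_ops {
		/* ... existing ops ... */

		/* Flush hardware caches for a nested domain; iommufd-only, no wrapper */
		int (*cache_invalidate_user)(struct iommu_domain *domain,
					     struct iommu_user_data_array *array);
	};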
> From: Liu, Yi L <yi.l.liu@intel.com>
> Sent: Thursday, December 21, 2023 11:40 PM
> +
> +
> +		if ((inv.flags & IOMMU_TEST_INVALIDATE_FLAG_ALL) &&
> +		    (inv.flags & IOMMU_TEST_INVALIDATE_FLAG_TRIGGER_ERROR)) {
> +			rc = -EINVAL;
> +			break;
> +		}
> +

a nit. is there a reason why the two flags can not be set together?

in concept a mock iommu error could occur in either invalidate-one
or invalidate-all.

otherwise,

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
On 12/21/2023 11:39 PM, Yi Liu wrote: > From: Lu Baolu <baolu.lu@linux.intel.com> > > This implements the .cache_invalidate_user() callback to support iotlb > flush for nested domain. > > Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> > Co-developed-by: Yi Liu <yi.l.liu@intel.com> > Signed-off-by: Yi Liu <yi.l.liu@intel.com> > --- > drivers/iommu/intel/nested.c | 116 +++++++++++++++++++++++++++++++++++ > 1 file changed, 116 insertions(+) > > diff --git a/drivers/iommu/intel/nested.c b/drivers/iommu/intel/nested.c > index b5a5563ab32c..c665e2647045 100644 > --- a/drivers/iommu/intel/nested.c > +++ b/drivers/iommu/intel/nested.c > @@ -73,9 +73,125 @@ static void intel_nested_domain_free(struct iommu_domain *domain) > kfree(to_dmar_domain(domain)); > } > > +static void nested_flush_pasid_iotlb(struct intel_iommu *iommu, > + struct dmar_domain *domain, u64 addr, > + unsigned long npages, bool ih) > +{ > + u16 did = domain_id_iommu(domain, iommu); > + unsigned long flags; > + > + spin_lock_irqsave(&domain->lock, flags); > + if (!list_empty(&domain->devices)) > + qi_flush_piotlb(iommu, did, IOMMU_NO_PASID, addr, > + npages, ih, NULL); > + spin_unlock_irqrestore(&domain->lock, flags); > +} > + > +static void nested_flush_dev_iotlb(struct dmar_domain *domain, u64 addr, > + unsigned mask, u32 *fault) > +{ > + struct device_domain_info *info; > + unsigned long flags; > + u16 sid, qdep; > + > + spin_lock_irqsave(&domain->lock, flags); > + list_for_each_entry(info, &domain->devices, link) { > + if (!info->ats_enabled) > + continue; > + sid = info->bus << 8 | info->devfn; > + qdep = info->ats_qdep; > + qi_flush_dev_iotlb(info->iommu, sid, info->pfsid, > + qdep, addr, mask, fault); > + quirk_extra_dev_tlb_flush(info, addr, mask, > + IOMMU_NO_PASID, qdep); > + } > + spin_unlock_irqrestore(&domain->lock, flags); > +} > + > +static void intel_nested_flush_cache(struct dmar_domain *domain, u64 addr, > + unsigned long npages, u32 *error) > +{ > + struct iommu_domain_info *info; > + unsigned long i; > + unsigned mask; > + u32 fault = 0; > + > + if (npages == U64_MAX) > + mask = 64 - VTD_PAGE_SHIFT; > + else > + mask = ilog2(__roundup_pow_of_two(npages)); > + > + xa_for_each(&domain->iommu_array, i, info) { > + nested_flush_pasid_iotlb(info->iommu, domain, addr, npages, 0); > + > + if (domain->has_iotlb_device) > + continue; Shouldn't this be if (!domain->has_iotlb_device)? 
> + > + nested_flush_dev_iotlb(domain, addr, mask, &fault); > + if (fault & (DMA_FSTS_ITE | DMA_FSTS_ICE)) > + break; > + } > + > + if (fault & DMA_FSTS_ICE) > + *error |= IOMMU_HWPT_INVALIDATE_VTD_S1_ICE; > + if (fault & DMA_FSTS_ITE) > + *error |= IOMMU_HWPT_INVALIDATE_VTD_S1_ITE; > +} > + > +static int intel_nested_cache_invalidate_user(struct iommu_domain *domain, > + struct iommu_user_data_array *array) > +{ > + struct dmar_domain *dmar_domain = to_dmar_domain(domain); > + struct iommu_hwpt_vtd_s1_invalidate inv_entry; > + u32 processed = 0; > + int ret = 0; > + u32 index; > + > + if (array->type != IOMMU_HWPT_INVALIDATE_DATA_VTD_S1) { > + ret = -EINVAL; > + goto out; > + } > + > + for (index = 0; index < array->entry_num; index++) { > + ret = iommu_copy_struct_from_user_array(&inv_entry, array, > + IOMMU_HWPT_INVALIDATE_DATA_VTD_S1, > + index, inv_error); > + if (ret) > + break; > + > + if (inv_entry.flags & ~IOMMU_VTD_INV_FLAGS_LEAF) { > + ret = -EOPNOTSUPP; > + break; > + } > + > + if (!IS_ALIGNED(inv_entry.addr, VTD_PAGE_SIZE) || > + ((inv_entry.npages == U64_MAX) && inv_entry.addr)) { > + ret = -EINVAL; > + break; > + } > + > + inv_entry.inv_error = 0; > + intel_nested_flush_cache(dmar_domain, inv_entry.addr, > + inv_entry.npages, &inv_entry.inv_error); > + > + ret = iommu_respond_struct_to_user_array(array, index, > + (void *)&inv_entry, > + sizeof(inv_entry)); > + if (ret) > + break; > + > + processed++; > + } > + > +out: > + array->entry_num = processed; > + return ret; > +} > + > static const struct iommu_domain_ops intel_nested_domain_ops = { > .attach_dev = intel_nested_attach_dev, > .free = intel_nested_domain_free, > + .cache_invalidate_user = intel_nested_cache_invalidate_user, > }; > > struct iommu_domain *intel_nested_domain_alloc(struct iommu_domain *parent,
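For readers following the uAPI contract rather than the driver internals, the per-entry loop quoted above reduces to the following skeleton; this is a condensed restatement of the quoted code, not new behaviour, and it highlights that each entry carries its own inv_error and that array->entry_num is rewritten on exit to report how many entries were actually processed.

	for (index = 0; index < array->entry_num; index++) {
		/* copy entry 'index' of the user array into inv_entry */
		ret = iommu_copy_struct_from_user_array(&inv_entry, array,
							IOMMU_HWPT_INVALIDATE_DATA_VTD_S1,
							index, inv_error);
		if (ret)
			break;

		/* validate flags/addr/npages, then flush IOTLB and (if any) dev-TLB */
		inv_entry.inv_error = 0;
		intel_nested_flush_cache(dmar_domain, inv_entry.addr,
					 inv_entry.npages, &inv_entry.inv_error);

		/* write the per-entry hardware error back to the same array slot */
		ret = iommu_respond_struct_to_user_array(array, index,
							 &inv_entry, sizeof(inv_entry));
		if (ret)
			break;

		processed++;
	}
	/* tell userspace how far we got, whether or not ret indicates an error */
	array->entry_num = processed;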
> From: Liu, Yi L <yi.l.liu@intel.com>
> Sent: Thursday, December 21, 2023 11:40 PM
>
> From: Nicolin Chen <nicolinc@nvidia.com>
>
> Allow to test whether IOTLB has been invalidated or not.
>
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> Signed-off-by: Yi Liu <yi.l.liu@intel.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> From: Liu, Yi L <yi.l.liu@intel.com>
> Sent: Thursday, December 21, 2023 11:40 PM
>
> +	fault &= DMA_FSTS_IQE | DMA_FSTS_ITE | DMA_FSTS_ICE;
> +	if (fault) {
> +		if (fsts)
> +			*fsts |= fault;

do we expect the fault to be accumulated? otherwise it's clearer to
just do direct assignment instead of asking for the caller to clear
the variable before invocation.

the rest looks good:

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> From: Yang, Weijiang <weijiang.yang@intel.com>
> Sent: Friday, December 22, 2023 11:56 AM
> > +
> > +	xa_for_each(&domain->iommu_array, i, info) {
> > +		nested_flush_pasid_iotlb(info->iommu, domain, addr, npages, 0);
> > +
> > +		if (domain->has_iotlb_device)
> > +			continue;
>
> Shouldn't this be if (!domain->has_iotlb_device)?

yes that is wrong.

actually it's weird to put domain check in a loop of domain->iommu_array.

that check along with devtlb flush should be done out of that loop.
> From: Liu, Yi L <yi.l.liu@intel.com>
> Sent: Thursday, December 21, 2023 11:40 PM
>
> +
> +static void intel_nested_flush_cache(struct dmar_domain *domain, u64 addr,
> +				     unsigned long npages, u32 *error)
> +{
> +	struct iommu_domain_info *info;
> +	unsigned long i;
> +	unsigned mask;
> +	u32 fault = 0;
> +
> +	if (npages == U64_MAX)
> +		mask = 64 - VTD_PAGE_SHIFT;
> +	else
> +		mask = ilog2(__roundup_pow_of_two(npages));
> +
> +	xa_for_each(&domain->iommu_array, i, info) {
> +		nested_flush_pasid_iotlb(info->iommu, domain, addr, npages, 0);

so IOMMU_VTD_INV_FLAGS_LEAF is defined but ignored?

> +
> +		if (domain->has_iotlb_device)
> +			continue;
> +
> +		nested_flush_dev_iotlb(domain, addr, mask, &fault);
> +		if (fault & (DMA_FSTS_ITE | DMA_FSTS_ICE))
> +			break;

here you may add a note that we don't plan to forward invalidation
queue error (i.e. IQE) to the caller as it's caused only by driver
internal bug.

> +
> +		if (!IS_ALIGNED(inv_entry.addr, VTD_PAGE_SIZE) ||
> +		    ((inv_entry.npages == U64_MAX) && inv_entry.addr)) {
> +			ret = -EINVAL;
> +			break;
> +		}
> +

why is [non-zero-addr, U64_MAX] an error? Is it explicitly stated to
be not supported by underlying helpers?
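A small sketch of the fault-to-uAPI-error mapping at the tail of intel_nested_flush_cache(), with the note Kevin asks for added as a comment. The error bit names come from the quoted patch; the exact comment wording is only a suggestion.

	/*
	 * An invalidation queue error (IQE) can only be caused by a driver
	 * internal bug (malformed descriptor), so it is not forwarded to the
	 * caller; only ICE/ITE, which reflect device/ATS state, are reported.
	 */
	if (fault & DMA_FSTS_ICE)
		*error |= IOMMU_HWPT_INVALIDATE_VTD_S1_ICE;
	if (fault & DMA_FSTS_ITE)
		*error |= IOMMU_HWPT_INVALIDATE_VTD_S1_ITE;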
> On Dec 22, 2023, at 11:56, Yang, Weijiang <weijiang.yang@intel.com> wrote: > > On 12/21/2023 11:39 PM, Yi Liu wrote: >> From: Lu Baolu <baolu.lu@linux.intel.com> >> >> This implements the .cache_invalidate_user() callback to support iotlb >> flush for nested domain. >> >> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> >> Co-developed-by: Yi Liu <yi.l.liu@intel.com> >> Signed-off-by: Yi Liu <yi.l.liu@intel.com> >> --- >> drivers/iommu/intel/nested.c | 116 +++++++++++++++++++++++++++++++++++ >> 1 file changed, 116 insertions(+) >> >> diff --git a/drivers/iommu/intel/nested.c b/drivers/iommu/intel/nested.c >> index b5a5563ab32c..c665e2647045 100644 >> --- a/drivers/iommu/intel/nested.c >> +++ b/drivers/iommu/intel/nested.c >> @@ -73,9 +73,125 @@ static void intel_nested_domain_free(struct iommu_domain *domain) >> kfree(to_dmar_domain(domain)); >> } >> +static void nested_flush_pasid_iotlb(struct intel_iommu *iommu, >> + struct dmar_domain *domain, u64 addr, >> + unsigned long npages, bool ih) >> +{ >> + u16 did = domain_id_iommu(domain, iommu); >> + unsigned long flags; >> + >> + spin_lock_irqsave(&domain->lock, flags); >> + if (!list_empty(&domain->devices)) >> + qi_flush_piotlb(iommu, did, IOMMU_NO_PASID, addr, >> + npages, ih, NULL); >> + spin_unlock_irqrestore(&domain->lock, flags); >> +} >> + >> +static void nested_flush_dev_iotlb(struct dmar_domain *domain, u64 addr, >> + unsigned mask, u32 *fault) >> +{ >> + struct device_domain_info *info; >> + unsigned long flags; >> + u16 sid, qdep; >> + >> + spin_lock_irqsave(&domain->lock, flags); >> + list_for_each_entry(info, &domain->devices, link) { >> + if (!info->ats_enabled) >> + continue; >> + sid = info->bus << 8 | info->devfn; >> + qdep = info->ats_qdep; >> + qi_flush_dev_iotlb(info->iommu, sid, info->pfsid, >> + qdep, addr, mask, fault); >> + quirk_extra_dev_tlb_flush(info, addr, mask, >> + IOMMU_NO_PASID, qdep); >> + } >> + spin_unlock_irqrestore(&domain->lock, flags); >> +} >> + >> +static void intel_nested_flush_cache(struct dmar_domain *domain, u64 addr, >> + unsigned long npages, u32 *error) >> +{ >> + struct iommu_domain_info *info; >> + unsigned long i; >> + unsigned mask; >> + u32 fault = 0; >> + >> + if (npages == U64_MAX) >> + mask = 64 - VTD_PAGE_SHIFT; >> + else >> + mask = ilog2(__roundup_pow_of_two(npages)); >> + >> + xa_for_each(&domain->iommu_array, i, info) { >> + nested_flush_pasid_iotlb(info->iommu, domain, addr, npages, 0); >> + >> + if (domain->has_iotlb_device) >> + continue; > > Shouldn't this be if (!domain->has_iotlb_device)? oops, yes it is. 
>> + >> + nested_flush_dev_iotlb(domain, addr, mask, &fault); >> + if (fault & (DMA_FSTS_ITE | DMA_FSTS_ICE)) >> + break; >> + } >> + >> + if (fault & DMA_FSTS_ICE) >> + *error |= IOMMU_HWPT_INVALIDATE_VTD_S1_ICE; >> + if (fault & DMA_FSTS_ITE) >> + *error |= IOMMU_HWPT_INVALIDATE_VTD_S1_ITE; >> +} >> + >> +static int intel_nested_cache_invalidate_user(struct iommu_domain *domain, >> + struct iommu_user_data_array *array) >> +{ >> + struct dmar_domain *dmar_domain = to_dmar_domain(domain); >> + struct iommu_hwpt_vtd_s1_invalidate inv_entry; >> + u32 processed = 0; >> + int ret = 0; >> + u32 index; >> + >> + if (array->type != IOMMU_HWPT_INVALIDATE_DATA_VTD_S1) { >> + ret = -EINVAL; >> + goto out; >> + } >> + >> + for (index = 0; index < array->entry_num; index++) { >> + ret = iommu_copy_struct_from_user_array(&inv_entry, array, >> + IOMMU_HWPT_INVALIDATE_DATA_VTD_S1, >> + index, inv_error); >> + if (ret) >> + break; >> + >> + if (inv_entry.flags & ~IOMMU_VTD_INV_FLAGS_LEAF) { >> + ret = -EOPNOTSUPP; >> + break; >> + } >> + >> + if (!IS_ALIGNED(inv_entry.addr, VTD_PAGE_SIZE) || >> + ((inv_entry.npages == U64_MAX) && inv_entry.addr)) { >> + ret = -EINVAL; >> + break; >> + } >> + >> + inv_entry.inv_error = 0; >> + intel_nested_flush_cache(dmar_domain, inv_entry.addr, >> + inv_entry.npages, &inv_entry.inv_error); >> + >> + ret = iommu_respond_struct_to_user_array(array, index, >> + (void *)&inv_entry, >> + sizeof(inv_entry)); >> + if (ret) >> + break; >> + >> + processed++; >> + } >> + >> +out: >> + array->entry_num = processed; >> + return ret; >> +} >> + >> static const struct iommu_domain_ops intel_nested_domain_ops = { >> .attach_dev = intel_nested_attach_dev, >> .free = intel_nested_domain_free, >> + .cache_invalidate_user = intel_nested_cache_invalidate_user, >> }; >> struct iommu_domain *intel_nested_domain_alloc(struct iommu_domain *parent, >
> On Dec 22, 2023, at 14:47, Tian, Kevin <kevin.tian@intel.com> wrote: > > >> >> From: Yang, Weijiang <weijiang.yang@intel.com> >> Sent: Friday, December 22, 2023 11:56 AM >>> + >>> + xa_for_each(&domain->iommu_array, i, info) { >>> + nested_flush_pasid_iotlb(info->iommu, domain, addr, >> npages, 0); >>> + >>> + if (domain->has_iotlb_device) >>> + continue; >> >> Shouldn't this be if (!domain->has_iotlb_device)? > > yes that is wrong. > > actually it's weird to put domain check in a loop of domain->iommu_array. > > that check along with devtlb flush should be done out of that loop. Maybe adding a bool, set it out of the loop, check the bool in the loop.
> From: Liu, Yi L <yi.l.liu@intel.com> > Sent: Friday, December 22, 2023 3:02 PM > > > > On Dec 22, 2023, at 14:47, Tian, Kevin <kevin.tian@intel.com> wrote: > > > > > >> > >> From: Yang, Weijiang <weijiang.yang@intel.com> > >> Sent: Friday, December 22, 2023 11:56 AM > >>> + > >>> + xa_for_each(&domain->iommu_array, i, info) { > >>> + nested_flush_pasid_iotlb(info->iommu, domain, addr, > >> npages, 0); > >>> + > >>> + if (domain->has_iotlb_device) > >>> + continue; > >> > >> Shouldn't this be if (!domain->has_iotlb_device)? > > > > yes that is wrong. > > > > actually it's weird to put domain check in a loop of domain->iommu_array. > > > > that check along with devtlb flush should be done out of that loop. > > Maybe adding a bool, set it out of the loop, check the bool in the loop. the point is that dev iotlb doesn't rely on info->iommu: nested_flush_dev_iotlb(domain, addr, mask, &fault); then why do it in the loop of info->iommu?
> On Dec 22, 2023, at 15:12, Tian, Kevin <kevin.tian@intel.com> wrote: > > >> >> From: Liu, Yi L <yi.l.liu@intel.com> >> Sent: Friday, December 22, 2023 3:02 PM >> >> >>>> On Dec 22, 2023, at 14:47, Tian, Kevin <kevin.tian@intel.com> wrote: >>> >>> >>>> >>>> From: Yang, Weijiang <weijiang.yang@intel.com> >>>> Sent: Friday, December 22, 2023 11:56 AM >>>>> + >>>>> + xa_for_each(&domain->iommu_array, i, info) { >>>>> + nested_flush_pasid_iotlb(info->iommu, domain, addr, >>>> npages, 0); >>>>> + >>>>> + if (domain->has_iotlb_device) >>>>> + continue; >>>> >>>> Shouldn't this be if (!domain->has_iotlb_device)? >>> >>> yes that is wrong. >>> >>> actually it's weird to put domain check in a loop of domain->iommu_array. >>> >>> that check along with devtlb flush should be done out of that loop. >> >> Maybe adding a bool, set it out of the loop, check the bool in the loop. > > the point is that dev iotlb doesn't rely on info->iommu: > > nested_flush_dev_iotlb(domain, addr, mask, &fault); > > then why do it in the loop of info->iommu? yes. It should have another device loop instead.
On 2023/12/22 10:30, Tian, Kevin wrote: >> From: Liu, Yi L <yi.l.liu@intel.com> >> Sent: Thursday, December 21, 2023 11:40 PM >> >> From: Lu Baolu <baolu.lu@linux.intel.com> >> >> The updates of the PTEs in the nested page table will be propagated to the >> hardware caches on both IOMMU (IOTLB) and devices (DevTLB/ATC). > > this is incorrect. the scope of this cmd is driver specific. yes. May just say the hardware caches. > >> >> Add a new domain op cache_invalidate_user for the userspace to flush the >> hardware caches for a nested domain through iommufd. No wrapper for it, >> as it's only supposed to be used by iommufd. Then, pass in invalidation >> requests in form of a user data array conatining a number of invalidation >> data entries. >> >> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> >> Reviewed-by: Kevin Tian <kevin.tian@intel.com> >> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> >> Signed-off-by: Yi Liu <yi.l.liu@intel.com> >> --- >> include/linux/iommu.h | 27 +++++++++++++++++++++++++++ >> 1 file changed, 27 insertions(+) >> >> diff --git a/include/linux/iommu.h b/include/linux/iommu.h >> index 6291aa7b079b..5c4a17f13761 100644 >> --- a/include/linux/iommu.h >> +++ b/include/linux/iommu.h >> @@ -284,6 +284,24 @@ struct iommu_user_data { >> size_t len; >> }; >> >> +/** >> + * struct iommu_user_data_array - iommu driver specific user space data >> array >> + * @type: The data type of all the entries in the user buffer array >> + * @uptr: Pointer to the user buffer array for copy_from_user() > > remove 'for copy_from_user(); > >> + * @entry_len: The fixed-width length of a entry in the array, in bytes > > s/a/an/ > >> + * @entry_num: The number of total entries in the array >> + * >> + * A array having a @entry_num number of @entry_len sized entries, each > > the first sentence is redundant. > >> entry is >> + * user space data, an uAPI defined in include/uapi/linux/iommufd.h where >> @type >> + * is also defined as enum iommu_xyz_data_type. > > I'd just say: > > "The user buffer includes an array of requests with format defined > in include/uapi/linux/iommufd.h" sure.
On 2023/12/22 11:39, Tian, Kevin wrote: >> From: Liu, Yi L <yi.l.liu@intel.com> >> Sent: Thursday, December 21, 2023 11:40 PM >> + >> + >> + if ((inv.flags & IOMMU_TEST_INVALIDATE_FLAG_ALL) && >> + (inv.flags & >> IOMMU_TEST_INVALIDATE_FLAG_TRIGGER_ERROR)) { >> + rc = -EINVAL; >> + break; >> + } >> + > > a nit. is there a reason why the two flags can not be set together? > > in concept a mock iommu error could occur in either invalidate-one > or invalidate-all. I see. I'm ok to relax this check and remove the selftest case as well. > otherwise, > > Reviewed-by: Kevin Tian <kevin.tian@intel.com>
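A sketch of what relaxing the selftest check could look like, per the nit above: drop the mutual-exclusion test and let TRIGGER_ERROR apply to both the single-entry and the invalidate-all paths. The mock helpers and the fields used to report the fake error are placeholders, not the actual selftest code.

	/* reject only genuinely unknown flags; ALL + TRIGGER_ERROR is allowed */
	if (inv.flags & ~(IOMMU_TEST_INVALIDATE_FLAG_ALL |
			  IOMMU_TEST_INVALIDATE_FLAG_TRIGGER_ERROR)) {
		rc = -EOPNOTSUPP;
		break;
	}

	if (inv.flags & IOMMU_TEST_INVALIDATE_FLAG_ALL)
		mock_iotlb_flush_all(mock);			/* placeholder helper */
	else
		mock_iotlb_flush_one(mock, inv.iotlb_id);	/* placeholder helper */

	if (inv.flags & IOMMU_TEST_INVALIDATE_FLAG_TRIGGER_ERROR)
		inv.hw_error = MOCK_INVALIDATE_FAKE_ERROR;	/* placeholder error report */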
On 2023/12/22 12:23, Tian, Kevin wrote: >> From: Liu, Yi L <yi.l.liu@intel.com> >> Sent: Thursday, December 21, 2023 11:40 PM >> >> + fault &= DMA_FSTS_IQE | DMA_FSTS_ITE | DMA_FSTS_ICE; >> + if (fault) { >> + if (fsts) >> + *fsts |= fault; > > do we expect the fault to be accumulated? otherwise it's clearer to > just do direct assignment instead of asking for the caller to clear > the variable before invocation. not quite get. do you mean the fault should not be cleared in the caller side? > the rest looks good: > > Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> From: Liu, Yi L <yi.l.liu@intel.com> > Sent: Tuesday, December 26, 2023 12:03 PM > > On 2023/12/22 12:23, Tian, Kevin wrote: > >> From: Liu, Yi L <yi.l.liu@intel.com> > >> Sent: Thursday, December 21, 2023 11:40 PM > >> > >> + fault &= DMA_FSTS_IQE | DMA_FSTS_ITE | DMA_FSTS_ICE; > >> + if (fault) { > >> + if (fsts) > >> + *fsts |= fault; > > > > do we expect the fault to be accumulated? otherwise it's clearer to > > just do direct assignment instead of asking for the caller to clear > > the variable before invocation. > > not quite get. do you mean the fault should not be cleared in the caller > side? > I meant: if (fsts) *fsts = fault; unless there is a reason to *OR* the original value.
On 2023/12/22 14:57, Tian, Kevin wrote: >> From: Liu, Yi L <yi.l.liu@intel.com> >> Sent: Thursday, December 21, 2023 11:40 PM >> >> + >> +static void intel_nested_flush_cache(struct dmar_domain *domain, u64 >> addr, >> + unsigned long npages, u32 *error) >> +{ >> + struct iommu_domain_info *info; >> + unsigned long i; >> + unsigned mask; >> + u32 fault = 0; >> + >> + if (npages == U64_MAX) >> + mask = 64 - VTD_PAGE_SHIFT; >> + else >> + mask = ilog2(__roundup_pow_of_two(npages)); >> + >> + xa_for_each(&domain->iommu_array, i, info) { >> + nested_flush_pasid_iotlb(info->iommu, domain, addr, >> npages, 0); > > so IOMMU_VTD_INV_FLAGS_LEAF is defined but ignored? yeah... it is. It is named as ih in the driver code. But it appears only the below code is set ih. When calling iommu_flush_iotlb_psi(), the 5th parameter (ih) may be true. static int intel_iommu_memory_notifier(struct notifier_block *nb, unsigned long val, void *v) { struct memory_notify *mhp = v; unsigned long start_vpfn = mm_to_dma_pfn(mhp->start_pfn); unsigned long last_vpfn = mm_to_dma_pfn(mhp->start_pfn + mhp->nr_pages - 1); switch (val) { case MEM_GOING_ONLINE: if (iommu_domain_identity_map(si_domain, start_vpfn, last_vpfn)) { pr_warn("Failed to build identity map for [%lx-%lx]\n", start_vpfn, last_vpfn); return NOTIFY_BAD; } break; case MEM_OFFLINE: case MEM_CANCEL_ONLINE: { struct dmar_drhd_unit *drhd; struct intel_iommu *iommu; LIST_HEAD(freelist); domain_unmap(si_domain, start_vpfn, last_vpfn, &freelist); rcu_read_lock(); for_each_active_iommu(iommu, drhd) iommu_flush_iotlb_psi(iommu, si_domain, start_vpfn, mhp->nr_pages, list_empty(&freelist), 0); rcu_read_unlock(); put_pages_list(&freelist); } break; } return NOTIFY_OK; } > >> + >> + if (domain->has_iotlb_device) >> + continue; >> + >> + nested_flush_dev_iotlb(domain, addr, mask, &fault); >> + if (fault & (DMA_FSTS_ITE | DMA_FSTS_ICE)) >> + break; > > here you may add a note that we don't plan to forward invalidation > queue error (i.e. IQE) to the caller as it's caused only by driver > internal bug. yes. > >> + >> + if (!IS_ALIGNED(inv_entry.addr, VTD_PAGE_SIZE) || >> + ((inv_entry.npages == U64_MAX) && inv_entry.addr)) { >> + ret = -EINVAL; >> + break; >> + } >> + > > why is [non-zero-addr, U64_MAX] an error? Is it explicitly stated to > be not supported by underlying helpers? no such limitation by underlying helpers. But in such case, the addr+npages*PAGE_SIZE would exceed U64_MAX, this seems a bit strange. But I'm fine to relax the check since the underlying helper only checks npages when determining paid-selective or not.
> From: Liu, Yi L <yi.l.liu@intel.com> > Sent: Tuesday, December 26, 2023 12:52 PM > >> + > >> + if (!IS_ALIGNED(inv_entry.addr, VTD_PAGE_SIZE) || > >> + ((inv_entry.npages == U64_MAX) && inv_entry.addr)) { > >> + ret = -EINVAL; > >> + break; > >> + } > >> + > > > > why is [non-zero-addr, U64_MAX] an error? Is it explicitly stated to > > be not supported by underlying helpers? > > no such limitation by underlying helpers. But in such case, the > addr+npages*PAGE_SIZE would exceed U64_MAX, this seems a bit > strange. But I'm fine to relax the check since the underlying helper > only checks npages when determining paid-selective or not. > I overlooked npages as end. let's keep the check.
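For clarity, the entry validation being kept, with a comment spelling out the rationale discussed above: npages == U64_MAX is the "whole address space" encoding, and pairing it with a non-zero addr would describe a range whose end wraps past U64_MAX. This mirrors the quoted check rather than changing it.

	/* addr must be page aligned per the VT-d stage-1 invalidation format */
	if (!IS_ALIGNED(inv_entry.addr, VTD_PAGE_SIZE) ||
	    /*
	     * npages == U64_MAX means "invalidate everything"; combining it
	     * with a non-zero addr would imply a range ending beyond U64_MAX,
	     * so treat such a request as malformed.
	     */
	    ((inv_entry.npages == U64_MAX) && inv_entry.addr)) {
		ret = -EINVAL;
		break;
	}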
On 2023/12/26 12:13, Tian, Kevin wrote: >> From: Liu, Yi L <yi.l.liu@intel.com> >> Sent: Tuesday, December 26, 2023 12:03 PM >> >> On 2023/12/22 12:23, Tian, Kevin wrote: >>>> From: Liu, Yi L <yi.l.liu@intel.com> >>>> Sent: Thursday, December 21, 2023 11:40 PM >>>> >>>> + fault &= DMA_FSTS_IQE | DMA_FSTS_ITE | DMA_FSTS_ICE; >>>> + if (fault) { >>>> + if (fsts) >>>> + *fsts |= fault; >>> >>> do we expect the fault to be accumulated? otherwise it's clearer to >>> just do direct assignment instead of asking for the caller to clear >>> the variable before invocation. >> >> not quite get. do you mean the fault should not be cleared in the caller >> side? >> > > I meant: > > if (fsts) > *fsts = fault; > > unless there is a reason to *OR* the original value. I guess no such a reason. :) let me modify it.
On 2023/12/26 14:15, Yi Liu wrote: > > > On 2023/12/26 12:13, Tian, Kevin wrote: >>> From: Liu, Yi L <yi.l.liu@intel.com> >>> Sent: Tuesday, December 26, 2023 12:03 PM >>> >>> On 2023/12/22 12:23, Tian, Kevin wrote: >>>>> From: Liu, Yi L <yi.l.liu@intel.com> >>>>> Sent: Thursday, December 21, 2023 11:40 PM >>>>> >>>>> + fault &= DMA_FSTS_IQE | DMA_FSTS_ITE | DMA_FSTS_ICE; >>>>> + if (fault) { >>>>> + if (fsts) >>>>> + *fsts |= fault; >>>> >>>> do we expect the fault to be accumulated? otherwise it's clearer to >>>> just do direct assignment instead of asking for the caller to clear >>>> the variable before invocation. >>> >>> not quite get. do you mean the fault should not be cleared in the caller >>> side? >>> >> >> I meant: >> >> if (fsts) >> *fsts = fault; >> >> unless there is a reason to *OR* the original value. > > I guess no such a reason. :) let me modify it. hmmm, replied too soon. The qi_check_fault() would be called multiple times in one invalidation circle as qi_submit_sync() needs to see if any fault happened before the hw writes back QI_DONE to the wait descriptor. There can be ICE which may eventually result in ITE. So caller of qi_check_fault() would continue to wait for QI_DONE. So qi_check_fault() returns 0 to let qi_submit_sync() go on though ICE detected. If we use '*fsts = fault;', then ICE would be missed since the input fsts pointer is the same in one qi_submit_sync() call.
On 2023/12/22 19:59, Liu, Yi L wrote: > >> On Dec 22, 2023, at 15:12, Tian, Kevin <kevin.tian@intel.com> wrote: >> >> >>> >>> From: Liu, Yi L <yi.l.liu@intel.com> >>> Sent: Friday, December 22, 2023 3:02 PM >>> >>> >>>>> On Dec 22, 2023, at 14:47, Tian, Kevin <kevin.tian@intel.com> wrote: >>>> >>>> >>>>> >>>>> From: Yang, Weijiang <weijiang.yang@intel.com> >>>>> Sent: Friday, December 22, 2023 11:56 AM >>>>>> + >>>>>> + xa_for_each(&domain->iommu_array, i, info) { >>>>>> + nested_flush_pasid_iotlb(info->iommu, domain, addr, >>>>> npages, 0); >>>>>> + >>>>>> + if (domain->has_iotlb_device) >>>>>> + continue; >>>>> >>>>> Shouldn't this be if (!domain->has_iotlb_device)? >>>> >>>> yes that is wrong. >>>> >>>> actually it's weird to put domain check in a loop of domain->iommu_array. >>>> >>>> that check along with devtlb flush should be done out of that loop. >>> >>> Maybe adding a bool, set it out of the loop, check the bool in the loop. >> >> the point is that dev iotlb doesn't rely on info->iommu: >> >> nested_flush_dev_iotlb(domain, addr, mask, &fault); >> >> then why do it in the loop of info->iommu? > > yes. It should have another device loop instead. let me move the device tlb related code out of the info->iommu loop.
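Based on the agreement above, a possible restructuring of intel_nested_flush_cache(): keep the per-IOMMU IOTLB flush inside the iommu_array loop, and do the device-TLB flush once, outside the loop, guarded by has_iotlb_device with the sense of the check fixed. This is only a sketch of the direction discussed; the actual next revision may differ.

	static void intel_nested_flush_cache(struct dmar_domain *domain, u64 addr,
					     unsigned long npages, u32 *error)
	{
		struct iommu_domain_info *info;
		unsigned long i;
		unsigned int mask;
		u32 fault = 0;

		if (npages == U64_MAX)
			mask = 64 - VTD_PAGE_SHIFT;
		else
			mask = ilog2(__roundup_pow_of_two(npages));

		/* per-IOMMU IOTLB flush */
		xa_for_each(&domain->iommu_array, i, info)
			nested_flush_pasid_iotlb(info->iommu, domain, addr, npages, 0);

		/* the device-TLB flush does not depend on info->iommu, do it once */
		if (domain->has_iotlb_device)
			nested_flush_dev_iotlb(domain, addr, mask, &fault);

		if (fault & DMA_FSTS_ICE)
			*error |= IOMMU_HWPT_INVALIDATE_VTD_S1_ICE;
		if (fault & DMA_FSTS_ITE)
			*error |= IOMMU_HWPT_INVALIDATE_VTD_S1_ITE;
	}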
> From: Liu, Yi L <yi.l.liu@intel.com> > Sent: Tuesday, December 26, 2023 4:44 PM > > On 2023/12/26 14:15, Yi Liu wrote: > > > > > > On 2023/12/26 12:13, Tian, Kevin wrote: > >>> From: Liu, Yi L <yi.l.liu@intel.com> > >>> Sent: Tuesday, December 26, 2023 12:03 PM > >>> > >>> On 2023/12/22 12:23, Tian, Kevin wrote: > >>>>> From: Liu, Yi L <yi.l.liu@intel.com> > >>>>> Sent: Thursday, December 21, 2023 11:40 PM > >>>>> > >>>>> + fault &= DMA_FSTS_IQE | DMA_FSTS_ITE | DMA_FSTS_ICE; > >>>>> + if (fault) { > >>>>> + if (fsts) > >>>>> + *fsts |= fault; > >>>> > >>>> do we expect the fault to be accumulated? otherwise it's clearer to > >>>> just do direct assignment instead of asking for the caller to clear > >>>> the variable before invocation. > >>> > >>> not quite get. do you mean the fault should not be cleared in the caller > >>> side? > >>> > >> > >> I meant: > >> > >> if (fsts) > >> *fsts = fault; > >> > >> unless there is a reason to *OR* the original value. > > > > I guess no such a reason. :) let me modify it. > > hmmm, replied too soon. The qi_check_fault() would be called multiple > times in one invalidation circle as qi_submit_sync() needs to see if any > fault happened before the hw writes back QI_DONE to the wait descriptor. > There can be ICE which may eventually result in ITE. So caller of > qi_check_fault() > would continue to wait for QI_DONE. So qi_check_fault() returns 0 to let > qi_submit_sync() go on though ICE detected. If we use '*fsts = fault;', > then ICE would be missed since the input fsts pointer is the same in > one qi_submit_sync() call. > ok, that makes sense then.
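To pin down the conclusion, the qi_check_fault() fragment stays as quoted: the OR is kept because qi_check_fault() can run several times while one qi_submit_sync() polls for QI_DONE, and an earlier ICE must not be lost when a later call only sees an ITE. The caller is expected to zero the variable once per submission.

	fault &= DMA_FSTS_IQE | DMA_FSTS_ITE | DMA_FSTS_ICE;
	if (fault) {
		if (fsts)
			*fsts |= fault;	/* accumulate across repeated calls */
		/* ... existing IQE/ICE/ITE handling continues here ... */
	}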
On 2023/12/26 12:51, Yi Liu wrote: > On 2023/12/22 14:57, Tian, Kevin wrote: >>> From: Liu, Yi L <yi.l.liu@intel.com> >>> Sent: Thursday, December 21, 2023 11:40 PM >>> >>> + >>> +static void intel_nested_flush_cache(struct dmar_domain *domain, u64 >>> addr, >>> + unsigned long npages, u32 *error) >>> +{ >>> + struct iommu_domain_info *info; >>> + unsigned long i; >>> + unsigned mask; >>> + u32 fault = 0; >>> + >>> + if (npages == U64_MAX) >>> + mask = 64 - VTD_PAGE_SHIFT; >>> + else >>> + mask = ilog2(__roundup_pow_of_two(npages)); >>> + >>> + xa_for_each(&domain->iommu_array, i, info) { >>> + nested_flush_pasid_iotlb(info->iommu, domain, addr, >>> npages, 0); >> >> so IOMMU_VTD_INV_FLAGS_LEAF is defined but ignored? > > yeah... it is. It is named as ih in the driver code. But it appears only > the below code is set ih. When calling iommu_flush_iotlb_psi(), the 5th > parameter (ih) may be true. > > static int intel_iommu_memory_notifier(struct notifier_block *nb, > unsigned long val, void *v) > { > struct memory_notify *mhp = v; > unsigned long start_vpfn = mm_to_dma_pfn(mhp->start_pfn); > unsigned long last_vpfn = mm_to_dma_pfn(mhp->start_pfn + > mhp->nr_pages - 1); > > switch (val) { > case MEM_GOING_ONLINE: > if (iommu_domain_identity_map(si_domain, > start_vpfn, last_vpfn)) { > pr_warn("Failed to build identity map for [%lx-%lx]\n", > start_vpfn, last_vpfn); > return NOTIFY_BAD; > } > break; > > case MEM_OFFLINE: > case MEM_CANCEL_ONLINE: > { > struct dmar_drhd_unit *drhd; > struct intel_iommu *iommu; > LIST_HEAD(freelist); > > domain_unmap(si_domain, start_vpfn, last_vpfn, &freelist); > > rcu_read_lock(); > for_each_active_iommu(iommu, drhd) > iommu_flush_iotlb_psi(iommu, si_domain, > start_vpfn, mhp->nr_pages, > list_empty(&freelist), 0); > rcu_read_unlock(); > put_pages_list(&freelist); > } > break; > } > > return NOTIFY_OK; > } I passed this flag to the intel_nested_flush_cache() now as the helper accepts an ih parameter.
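A sketch of how the leaf hint could be plumbed through, as Yi describes for the next revision: the uAPI flag becomes the ih argument of the flush helper and is forwarded to qi_flush_piotlb() via nested_flush_pasid_iotlb(). The signatures here are extrapolated from the quoted patch, so take them as illustrative.

	static void intel_nested_flush_cache(struct dmar_domain *domain, u64 addr,
					     unsigned long npages, bool ih, u32 *error)
	{
		/* ... mask computation as before ... */
		xa_for_each(&domain->iommu_array, i, info)
			nested_flush_pasid_iotlb(info->iommu, domain, addr, npages, ih);
		/* ... dev-TLB flush and error mapping as before ... */
	}

	/* caller side: translate the uAPI leaf flag into the driver's ih bit */
	intel_nested_flush_cache(dmar_domain, inv_entry.addr, inv_entry.npages,
				 inv_entry.flags & IOMMU_VTD_INV_FLAGS_LEAF,
				 &inv_entry.inv_error);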
>-----Original Message----- >From: Liu, Yi L <yi.l.liu@intel.com> >Sent: Tuesday, December 26, 2023 4:44 PM >Subject: Re: [PATCH v7 7/9] iommu/vt-d: Allow qi_submit_sync() to return >the QI faults > >On 2023/12/26 14:15, Yi Liu wrote: >> >> >> On 2023/12/26 12:13, Tian, Kevin wrote: >>>> From: Liu, Yi L <yi.l.liu@intel.com> >>>> Sent: Tuesday, December 26, 2023 12:03 PM >>>> >>>> On 2023/12/22 12:23, Tian, Kevin wrote: >>>>>> From: Liu, Yi L <yi.l.liu@intel.com> >>>>>> Sent: Thursday, December 21, 2023 11:40 PM >>>>>> >>>>>> + fault &= DMA_FSTS_IQE | DMA_FSTS_ITE | DMA_FSTS_ICE; >>>>>> + if (fault) { >>>>>> + if (fsts) >>>>>> + *fsts |= fault; >>>>> >>>>> do we expect the fault to be accumulated? otherwise it's clearer to >>>>> just do direct assignment instead of asking for the caller to clear >>>>> the variable before invocation. >>>> >>>> not quite get. do you mean the fault should not be cleared in the caller >>>> side? >>>> >>> >>> I meant: >>> >>> if (fsts) >>> *fsts = fault; >>> >>> unless there is a reason to *OR* the original value. >> >> I guess no such a reason. :) let me modify it. > >hmmm, replied too soon. The qi_check_fault() would be called multiple >times in one invalidation circle as qi_submit_sync() needs to see if any >fault happened before the hw writes back QI_DONE to the wait descriptor. >There can be ICE which may eventually result in ITE. So caller of >qi_check_fault() >would continue to wait for QI_DONE. So qi_check_fault() returns 0 to let >qi_submit_sync() go on though ICE detected. If we use '*fsts = fault;', >then ICE would be missed since the input fsts pointer is the same in >one qi_submit_sync() call. Is it necessary to return fault to user if qi_check_fault() return -EAGAIN and a restart run succeeds? Thanks Zhenzhong
On 12/27/2023 5:06 PM, Duan, Zhenzhong wrote: > >> -----Original Message----- >> From: Liu, Yi L <yi.l.liu@intel.com> >> Sent: Tuesday, December 26, 2023 4:44 PM >> Subject: Re: [PATCH v7 7/9] iommu/vt-d: Allow qi_submit_sync() to return >> the QI faults >> >> On 2023/12/26 14:15, Yi Liu wrote: >>> >>> On 2023/12/26 12:13, Tian, Kevin wrote: >>>>> From: Liu, Yi L <yi.l.liu@intel.com> >>>>> Sent: Tuesday, December 26, 2023 12:03 PM >>>>> >>>>> On 2023/12/22 12:23, Tian, Kevin wrote: >>>>>>> From: Liu, Yi L <yi.l.liu@intel.com> >>>>>>> Sent: Thursday, December 21, 2023 11:40 PM >>>>>>> >>>>>>> + fault &= DMA_FSTS_IQE | DMA_FSTS_ITE | DMA_FSTS_ICE; >>>>>>> + if (fault) { >>>>>>> + if (fsts) >>>>>>> + *fsts |= fault; >>>>>> do we expect the fault to be accumulated? otherwise it's clearer to >>>>>> just do direct assignment instead of asking for the caller to clear >>>>>> the variable before invocation. >>>>> not quite get. do you mean the fault should not be cleared in the caller >>>>> side? >>>>> >>>> I meant: >>>> >>>> if (fsts) >>>> *fsts = fault; >>>> >>>> unless there is a reason to *OR* the original value. >>> I guess no such a reason. :) let me modify it. >> hmmm, replied too soon. The qi_check_fault() would be called multiple >> times in one invalidation circle as qi_submit_sync() needs to see if any >> fault happened before the hw writes back QI_DONE to the wait descriptor. >> There can be ICE which may eventually result in ITE. So caller of >> qi_check_fault() >> would continue to wait for QI_DONE. So qi_check_fault() returns 0 to let >> qi_submit_sync() go on though ICE detected. If we use '*fsts = fault;', >> then ICE would be missed since the input fsts pointer is the same in >> one qi_submit_sync() call. > Is it necessary to return fault to user if qi_check_fault() return -EAGAIN and > a restart run succeeds? Issue a device-TLB invalidation to no response device there is possibility will be trapped there loop for ITE , never get return. Thanks, Ethan > Thanks > Zhenzhong
On 2023/12/27 17:33, Ethan Zhao wrote: > > On 12/27/2023 5:06 PM, Duan, Zhenzhong wrote: >> >>> -----Original Message----- >>> From: Liu, Yi L <yi.l.liu@intel.com> >>> Sent: Tuesday, December 26, 2023 4:44 PM >>> Subject: Re: [PATCH v7 7/9] iommu/vt-d: Allow qi_submit_sync() to return >>> the QI faults >>> >>> On 2023/12/26 14:15, Yi Liu wrote: >>>> >>>> On 2023/12/26 12:13, Tian, Kevin wrote: >>>>>> From: Liu, Yi L <yi.l.liu@intel.com> >>>>>> Sent: Tuesday, December 26, 2023 12:03 PM >>>>>> >>>>>> On 2023/12/22 12:23, Tian, Kevin wrote: >>>>>>>> From: Liu, Yi L <yi.l.liu@intel.com> >>>>>>>> Sent: Thursday, December 21, 2023 11:40 PM >>>>>>>> >>>>>>>> + fault &= DMA_FSTS_IQE | DMA_FSTS_ITE | DMA_FSTS_ICE; >>>>>>>> + if (fault) { >>>>>>>> + if (fsts) >>>>>>>> + *fsts |= fault; >>>>>>> do we expect the fault to be accumulated? otherwise it's clearer to >>>>>>> just do direct assignment instead of asking for the caller to clear >>>>>>> the variable before invocation. >>>>>> not quite get. do you mean the fault should not be cleared in the caller >>>>>> side? >>>>>> >>>>> I meant: >>>>> >>>>> if (fsts) >>>>> *fsts = fault; >>>>> >>>>> unless there is a reason to *OR* the original value. >>>> I guess no such a reason. :) let me modify it. >>> hmmm, replied too soon. The qi_check_fault() would be called multiple >>> times in one invalidation circle as qi_submit_sync() needs to see if any >>> fault happened before the hw writes back QI_DONE to the wait descriptor. >>> There can be ICE which may eventually result in ITE. So caller of >>> qi_check_fault() >>> would continue to wait for QI_DONE. So qi_check_fault() returns 0 to let >>> qi_submit_sync() go on though ICE detected. If we use '*fsts = fault;', >>> then ICE would be missed since the input fsts pointer is the same in >>> one qi_submit_sync() call. >> Is it necessary to return fault to user if qi_check_fault() return >> -EAGAIN and >> a restart run succeeds? no need if a restart succeeds. I would add a *fault = 0 per the restart. > > Issue a device-TLB invalidation to no response device there is possibility > > will be trapped there loop for ITE , never get return. yes. This the implementation today, in future I think we may need a kind of timeout mechanism, so that it can return and report the error to user. In concept, in nested translation, the page table is owned by userspace, so it makes more sense to let userspace know it and take proper action.
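A sketch of the "*fault = 0 per the restart" idea mentioned above, placed in qi_submit_sync()'s retry loop: if qi_check_fault() asks for a restart (-EAGAIN) and the retry then succeeds, stale fault bits from the aborted attempt are cleared so the caller does not see an error for an invalidation that ultimately completed. The loop structure is simplified from the actual driver, and the extra fault argument follows this series.

	restart:
		rc = 0;
		/* ... build and submit the descriptors ... */

		while (qi->desc_status[wait_index] != QI_DONE) {
			rc = qi_check_fault(iommu, index, wait_index, fsts);
			if (rc)
				break;
			cpu_relax();
		}

		if (rc == -EAGAIN) {
			/* the retry starts from a clean slate; drop stale fault bits */
			if (fsts)
				*fsts = 0;
			goto restart;
		}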
> From: Liu, Yi L <yi.l.liu@intel.com> > Sent: Wednesday, December 27, 2023 10:13 PM > > On 2023/12/27 17:33, Ethan Zhao wrote: > > > > On 12/27/2023 5:06 PM, Duan, Zhenzhong wrote: > >> > >>> -----Original Message----- > >>> From: Liu, Yi L <yi.l.liu@intel.com> > >>> Sent: Tuesday, December 26, 2023 4:44 PM > >>> Subject: Re: [PATCH v7 7/9] iommu/vt-d: Allow qi_submit_sync() to > return > >>> the QI faults > >>> > >>> On 2023/12/26 14:15, Yi Liu wrote: > >>>> > >>>> On 2023/12/26 12:13, Tian, Kevin wrote: > >>>>>> From: Liu, Yi L <yi.l.liu@intel.com> > >>>>>> Sent: Tuesday, December 26, 2023 12:03 PM > >>>>>> > >>>>>> On 2023/12/22 12:23, Tian, Kevin wrote: > >>>>>>>> From: Liu, Yi L <yi.l.liu@intel.com> > >>>>>>>> Sent: Thursday, December 21, 2023 11:40 PM > >>>>>>>> > >>>>>>>> + fault &= DMA_FSTS_IQE | DMA_FSTS_ITE | DMA_FSTS_ICE; > >>>>>>>> + if (fault) { > >>>>>>>> + if (fsts) > >>>>>>>> + *fsts |= fault; > >>>>>>> do we expect the fault to be accumulated? otherwise it's clearer to > >>>>>>> just do direct assignment instead of asking for the caller to clear > >>>>>>> the variable before invocation. > >>>>>> not quite get. do you mean the fault should not be cleared in the > caller > >>>>>> side? > >>>>>> > >>>>> I meant: > >>>>> > >>>>> if (fsts) > >>>>> *fsts = fault; > >>>>> > >>>>> unless there is a reason to *OR* the original value. > >>>> I guess no such a reason. :) let me modify it. > >>> hmmm, replied too soon. The qi_check_fault() would be called multiple > >>> times in one invalidation circle as qi_submit_sync() needs to see if any > >>> fault happened before the hw writes back QI_DONE to the wait > descriptor. > >>> There can be ICE which may eventually result in ITE. So caller of > >>> qi_check_fault() > >>> would continue to wait for QI_DONE. So qi_check_fault() returns 0 to let > >>> qi_submit_sync() go on though ICE detected. If we use '*fsts = fault;', > >>> then ICE would be missed since the input fsts pointer is the same in > >>> one qi_submit_sync() call. > >> Is it necessary to return fault to user if qi_check_fault() return > >> -EAGAIN and > >> a restart run succeeds? > > no need if a restart succeeds. I would add a *fault = 0 per the restart. > > > > > Issue a device-TLB invalidation to no response device there is possibility > > > > will be trapped there loop for ITE , never get return. > > yes. This the implementation today, in future I think we may need a kind > of timeout mechanism, so that it can return and report the error to user. > In concept, in nested translation, the page table is owned by userspace, so > it makes more sense to let userspace know it and take proper action. > it doesn't make sense to retry upon an invalidation request from userspace. if retry is required that is the policy of guest iommu driver. Also it's not good to introduce a uapi flag which won't be set by current driver. this can be solved by a simple change in qi_check_fault(): if (qi->desc_status[wait_index] == QI_ABORT) - return -EAGAIN; + return fsts ? -ETIMEDOUT : -EAGAIN; because if the caller wants to know the fault reason the implication is that the caller will decide how to cope with the fault. It is incorrect for qi_check_fault() to decide.
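Kevin's suggested one-liner in context, as a sketch: when the wait descriptor was aborted due to a device-TLB invalidation timeout and the caller supplied a fault pointer, return -ETIMEDOUT so the caller (ultimately userspace) decides how to cope, instead of the driver silently retrying; plain -EAGAIN remains the behaviour for callers that do not ask for fault reporting.

	/*
	 * A caller that passes a fault pointer wants to handle the error
	 * itself (e.g. forward it for a user-initiated nested invalidation),
	 * so don't retry on its behalf.
	 */
	if (qi->desc_status[wait_index] == QI_ABORT)
		return fsts ? -ETIMEDOUT : -EAGAIN;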