From patchwork Thu Feb 6 10:34:26 2025
From: Niklas Neronin <niklas.neronin@linux.intel.com>
To: mathias.nyman@linux.intel.com
Cc: linux-usb@vger.kernel.org, Niklas Neronin <niklas.neronin@linux.intel.com>
Subject: [PATCH 2/4] usb: xhci: move debug capabilities from trb_in_td() to handle_tx_event()
Date: Thu, 6 Feb 2025 12:34:26 +0200
Message-ID: <20250206103428.1034784-3-niklas.neronin@linux.intel.com>
In-Reply-To: <20250206103428.1034784-1-niklas.neronin@linux.intel.com>
References: <20250206103428.1034784-1-niklas.neronin@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

Function trb_in_td() currently includes debug capabilities that are
triggered when its 'debug' argument is set to true.
The only consumer of these debug capabilities is handle_tx_event(), which
calls trb_in_td() twice: once for its primary functionality, and a second
time solely for debugging purposes if the first call returns NULL. This
approach is inefficient and can lead to confusion, as trb_in_td() executes
the same code with identical arguments twice, differing only in the debug
output during the second execution.

To enhance clarity and efficiency, move the debug capabilities out of
trb_in_td() and integrate them directly into handle_tx_event(). This
change reduces the argument count of trb_in_td() and ensures that the
debug steps are executed only when necessary, streamlining the function's
operation.

Signed-off-by: Niklas Neronin <niklas.neronin@linux.intel.com>
---
 drivers/usb/host/xhci-ring.c | 40 +++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 21 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 467a3abf8f53..a69972cc400c 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -281,8 +281,7 @@ static void inc_enq(struct xhci_hcd *xhci, struct xhci_ring *ring,
 * If the suspect DMA address is a TRB in this TD, this function returns that
 * TRB's segment. Otherwise it returns 0.
 */
-static struct xhci_segment *trb_in_td(struct xhci_hcd *xhci, struct xhci_td *td,
-				      dma_addr_t suspect_dma, bool debug)
+static struct xhci_segment *trb_in_td(struct xhci_td *td, dma_addr_t suspect_dma)
 {
 	dma_addr_t start_dma;
 	dma_addr_t end_seg_dma;
@@ -301,15 +300,6 @@ static struct xhci_segment *trb_in_td(struct xhci_hcd *xhci, struct xhci_td *td,
 		/* If the end TRB isn't in this segment, this is set to 0 */
 		end_trb_dma = xhci_trb_virt_to_dma(cur_seg, td->end_trb);

-		if (debug)
-			xhci_warn(xhci,
-				  "Looking for event-dma %016llx trb-start %016llx trb-end %016llx seg-start %016llx seg-end %016llx\n",
-				  (unsigned long long)suspect_dma,
-				  (unsigned long long)start_dma,
-				  (unsigned long long)end_trb_dma,
-				  (unsigned long long)cur_seg->dma,
-				  (unsigned long long)end_seg_dma);
-
 		if (end_trb_dma > 0) {
 			/* The end TRB is in this segment, so suspect should be here */
 			if (start_dma <= end_trb_dma) {
@@ -1075,7 +1065,7 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep)
 					       td->urb->stream_id);
 		hw_deq &= ~0xf;

-		if (td->cancel_status == TD_HALTED || trb_in_td(xhci, td, hw_deq, false)) {
+		if (td->cancel_status == TD_HALTED || trb_in_td(td, hw_deq)) {
 			switch (td->cancel_status) {
 			case TD_CLEARED: /* TD is already no-op */
 			case TD_CLEARING_CACHE: /* set TR deq command already queued */
@@ -1165,7 +1155,7 @@ static struct xhci_td *find_halted_td(struct xhci_virt_ep *ep)
 		hw_deq = xhci_get_hw_deq(ep->xhci, ep->vdev, ep->ep_index, 0);
 		hw_deq &= ~0xf;
 		td = list_first_entry(&ep->ring->td_list, struct xhci_td, td_list);
-		if (trb_in_td(ep->xhci, td, hw_deq, false))
+		if (trb_in_td(td, hw_deq))
 			return td;
 	}
 	return NULL;
@@ -2832,7 +2822,7 @@ static int handle_tx_event(struct xhci_hcd *xhci,
 		 */
 		td = list_first_entry_or_null(&ep_ring->td_list, struct xhci_td, td_list);

-		if (td && td->error_mid_td && !trb_in_td(xhci, td, ep_trb_dma, false)) {
+		if (td && td->error_mid_td && !trb_in_td(td, ep_trb_dma)) {
 			xhci_dbg(xhci, "Missing TD completion event after mid TD error\n");
 			xhci_dequeue_td(xhci, td, ep_ring, td->status);
 		}
@@ -2860,7 +2850,7 @@ static int handle_tx_event(struct xhci_hcd *xhci,
 				      td_list);

 		/* Is this a TRB in the currently executing TD? */
-		ep_seg = trb_in_td(xhci, td, ep_trb_dma, false);
+		ep_seg = trb_in_td(td, ep_trb_dma);

 		if (!ep_seg) {

@@ -2899,12 +2889,7 @@ static int handle_tx_event(struct xhci_hcd *xhci,
 			}

 			/* HC is busted, give up! */
-			xhci_err(xhci,
-				 "ERROR Transfer event TRB DMA ptr not part of current TD ep_index %d comp_code %u\n",
-				 ep_index, trb_comp_code);
-			trb_in_td(xhci, td, ep_trb_dma, true);
-
-			return -ESHUTDOWN;
+			goto debug_finding_td;
 		}

 		if (ep->skip) {
@@ -2957,6 +2942,19 @@ static int handle_tx_event(struct xhci_hcd *xhci,

 	return 0;

+debug_finding_td:
+	xhci_err(xhci, "Transfer event %u ep %d dma %016llx not part of current TD start %016llx end %016llx\n",
+		 trb_comp_code, ep_index, (unsigned long long)ep_trb_dma,
+		 (unsigned long long)xhci_trb_virt_to_dma(td->start_seg, td->start_trb),
+		 (unsigned long long)xhci_trb_virt_to_dma(td->end_seg, td->end_trb));
+
+	xhci_for_each_ring_seg(ep_ring->first_seg, ep_seg) {
+		xhci_warn(xhci, "Ring seg %u trb start %016llx end %016llx\n", ep_seg->num,
+			  (unsigned long long)ep_seg->dma,
+			  (unsigned long long)(ep_seg->dma + TRB_SEGMENT_SIZE));
+	}
+	return -ESHUTDOWN;
+
 err_out:
 	xhci_err(xhci, "@%016llx %08x %08x %08x %08x\n",
 		 (unsigned long long) xhci_trb_virt_to_dma(

From patchwork Thu Feb 6 10:34:28 2025
From: Niklas Neronin <niklas.neronin@linux.intel.com>
To: mathias.nyman@linux.intel.com
Cc: linux-usb@vger.kernel.org, Niklas Neronin <niklas.neronin@linux.intel.com>
Subject: [PATCH 4/4] usb: xhci: modify trb_in_td() to be more modular
Date: Thu, 6 Feb 2025 12:34:28 +0200
Message-ID: <20250206103428.1034784-5-niklas.neronin@linux.intel.com>
In-Reply-To: <20250206103428.1034784-1-niklas.neronin@linux.intel.com>
References: <20250206103428.1034784-1-niklas.neronin@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

Lay the groundwork for a future handle_tx_event() rework, which will
require a function that checks whether a DMA address is in the queue.

At its core, trb_in_td() checks whether a TRB falls within the specified
start and end TRB/segment range, which is a common requirement. For
instance, a ring has pointers to the queue's first and last TRB/segment,
which means that with slight modifications and renaming, trb_in_td()
could work for other structures, not only TDs.

Modify trb_in_td() to accept pointers to the start and end TRB/segment,
and introduce a new function that takes an 'xhci_td' struct pointer,
forwarding its elements to dma_in_range() (previously trb_in_td()).
Signed-off-by: Niklas Neronin <niklas.neronin@linux.intel.com>
---
 drivers/usb/host/xhci-ring.c | 41 ++++++++++++++++++++++--------------
 1 file changed, 25 insertions(+), 16 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 23337c9d34c1..34699038b7f2 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -278,24 +278,28 @@ static void inc_enq(struct xhci_hcd *xhci, struct xhci_ring *ring,
 }

 /*
- * If the suspect DMA address is a TRB in this TD, this function returns that
- * TRB's segment. Otherwise it returns 0.
+ * Check if the DMA address of a TRB falls within the specified range.
+ * The range is defined by 'start_trb' in 'start_seg' and 'end_trb' in 'end_seg'.
+ * If the TRB's DMA address is within this range, return the segment containing the TRB.
+ * Otherwise, return 'NULL'.
 */
-static struct xhci_segment *trb_in_td(struct xhci_td *td, dma_addr_t dma)
+static struct xhci_segment *dma_in_range(struct xhci_segment *start_seg, union xhci_trb *start_trb,
+					 struct xhci_segment *end_seg, union xhci_trb *end_trb,
+					 dma_addr_t dma)
 {
-	struct xhci_segment *seg = td->start_seg;
+	struct xhci_segment *seg = start_seg;

-	if (td->start_seg == td->end_seg) {
-		if (td->start_trb <= td->end_trb) {
-			if (xhci_trb_virt_to_dma(td->start_seg, td->start_trb) <= dma &&
-			    dma <= xhci_trb_virt_to_dma(td->end_seg, td->end_trb))
+	if (start_seg == end_seg) {
+		if (start_trb <= end_trb) {
+			if (xhci_trb_virt_to_dma(start_seg, start_trb) <= dma &&
+			    dma <= xhci_trb_virt_to_dma(end_seg, end_trb))
 				return seg;
 			return NULL;
 		}
 		/* Edge case, the TD wrapped around to the start segment.
 		 */
-		if (xhci_trb_virt_to_dma(td->end_seg, td->end_trb) < dma &&
-		    dma < xhci_trb_virt_to_dma(td->start_seg, td->start_trb))
+		if (xhci_trb_virt_to_dma(end_seg, end_trb) < dma &&
+		    dma < xhci_trb_virt_to_dma(start_seg, start_trb))
 			return NULL;
 		if (seg->dma <= dma && dma <= (seg->dma + TRB_SEGMENT_SIZE))
 			return seg;
@@ -304,24 +308,29 @@ static struct xhci_segment *trb_in_td(struct xhci_td *td, dma_addr_t dma)

 	/* Loop through segment which don't contain the DMA address. */
 	while (dma < seg->dma || (seg->dma + TRB_SEGMENT_SIZE) <= dma) {
-		if (seg == td->end_seg)
+		if (seg == end_seg)
 			return NULL;
 		seg = seg->next;
-		if (seg == td->start_seg)
+		if (seg == start_seg)
 			return NULL;
 	}

-	if (seg == td->start_seg) {
-		if (dma < xhci_trb_virt_to_dma(td->start_seg, td->start_trb))
+	if (seg == start_seg) {
+		if (dma < xhci_trb_virt_to_dma(start_seg, start_trb))
 			return NULL;
-	} else if (seg == td->end_seg) {
-		if (xhci_trb_virt_to_dma(td->end_seg, td->end_trb) < dma)
+	} else if (seg == end_seg) {
+		if (xhci_trb_virt_to_dma(end_seg, end_trb) < dma)
 			return NULL;
 	}

 	return seg;
 }

+static struct xhci_segment *trb_in_td(struct xhci_td *td, dma_addr_t dma)
+{
+	return dma_in_range(td->start_seg, td->start_trb, td->end_seg, td->end_trb, dma);
+}
+
 /*
 * Return number of free normal TRBs from enqueue to dequeue pointer on ring.
 * Not counting an assumed link TRB at end of each TRBS_PER_SEGMENT sized segment.