Message ID | 20250519-mhi_bw_up-v3-0-3acd4a17bbb5@oss.qualcomm.com |
---|---|
Series | bus: mhi: host: Add support for mhi bus bw |
On Mon, 19 May 2025, Krishna Chaitanya Chundru wrote:

> If the driver wants to move to higher data rate/speed than the current data
> rate then the controller driver may need to change certain votes so that
> link may come up at requested data rate/speed like QCOM PCIe controllers
> need to change their RPMh (Resource Power Manager-hardened) state. Once
> link retraining is done controller drivers needs to adjust their votes
> based on the final data rate.
>
> Some controllers also may need to update their bandwidth voting like
> ICC bw votings etc.
>
> So, add pre_scale_bus_bw() & post_scale_bus_bw() op to call before & after
> the link re-train. There is no explicit locking mechanisms as these are
> called by a single client endpoint driver.
>
> In case of PCIe switch, if there is a request to change target speed for a
> downstream port then no need to call these function ops as these are
> outside the scope of the controller drivers.
>
> Signed-off-by: Krishna Chaitanya Chundru <krishna.chundru@oss.qualcomm.com>
> ---
>  drivers/pci/pcie/bwctrl.c | 15 +++++++++++++++
>  include/linux/pci.h       | 14 ++++++++++++++
>  2 files changed, 29 insertions(+)
>
> diff --git a/drivers/pci/pcie/bwctrl.c b/drivers/pci/pcie/bwctrl.c
> index d8d2aa85a22928b99c5bba1d2bcc5647c0edeeb6..3525bc0cd10f1dd7794abbe84ccb10e2c53a10af 100644
> --- a/drivers/pci/pcie/bwctrl.c
> +++ b/drivers/pci/pcie/bwctrl.c
> @@ -161,6 +161,8 @@ static int pcie_bwctrl_change_speed(struct pci_dev *port, u16 target_speed, bool
>  int pcie_set_target_speed(struct pci_dev *port, enum pci_bus_speed speed_req,
>  			  bool use_lt)
>  {
> +	struct pci_host_bridge *host = pci_find_host_bridge(port->bus);
> +	bool is_rootbus = pci_is_root_bus(port->bus);
>  	struct pci_bus *bus = port->subordinate;
>  	u16 target_speed;
>  	int ret;
> @@ -173,6 +175,16 @@ int pcie_set_target_speed(struct pci_dev *port, enum pci_bus_speed speed_req,
>
>  	target_speed = pcie_bwctrl_select_speed(port, speed_req);
>
> +	/*
> +	 * The host bridge driver may need to be scaled for targeted speed
> +	 * otherwise link might not come up at requested speed.
> +	 */
> +	if (is_rootbus && host->pre_scale_bus_bw) {
> +		ret = host->pre_scale_bus_bw(host, port, target_speed);
> +		if (ret)
> +			return ret;
> +	}
> +
>  	scoped_guard(rwsem_read, &pcie_bwctrl_setspeed_rwsem) {
>  		struct pcie_bwctrl_data *data = port->link_bwctrl;
>
> @@ -197,6 +209,9 @@ int pcie_set_target_speed(struct pci_dev *port, enum pci_bus_speed speed_req,
>  	    !list_empty(&bus->devices))
>  		ret = -EAGAIN;
>
> +	if (bus && is_rootbus && host->post_scale_bus_bw)
> +		host->post_scale_bus_bw(host, port, pci_bus_speed2lnkctl2(bus->cur_bus_speed));
> +
>  	return ret;
>  }
>
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 51e2bd6405cda5acc33d268bbe1d491b145e083f..7eb0856ba0ed20bd1336683b68add124c7483902 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -601,6 +601,20 @@ struct pci_host_bridge {
>  	void (*release_fn)(struct pci_host_bridge *);
>  	int (*enable_device)(struct pci_host_bridge *bridge, struct pci_dev *dev);
>  	void (*disable_device)(struct pci_host_bridge *bridge, struct pci_dev *dev);
> +	/*
> +	 * Callback to the host bridge drivers to update ICC bw votes, clock frequencies etc

BW

> +	 * for the link re-train to come up in targeted speed. These are intended to be
> +	 * called by devices directly attached to the root port. These are called by a single

Root Port

> +	 * client endpoint driver, so there is no need for explicit locking mechanisms.

Endpoint

> +	 */
> +	int (*pre_scale_bus_bw)(struct pci_host_bridge *bridge, struct pci_dev *dev, int speed);
> +	/*
> +	 * Callback to the host bridge drivers to adjust ICC bw votes, clock frequencies etc
> +	 * to the updated speed after link re-train. These are intended to be called by
> +	 * devices directly attached to the root port. These are called by a single client
> +	 * endpoint driver, so there is no need for explicit locking mechanisms.
> +	 */

Please fold comments to 80 characters.

> +	void (*post_scale_bus_bw)(struct pci_host_bridge *bridge, struct pci_dev *dev, int speed);

I still don't like the names. Maybe simply pre/post_link_speed_change
would sound more generic.

Not a show-stopper but the current name sounds pretty esoteric.

>  	void *release_data;
>  	unsigned int ignore_reset_delay:1;	/* For entire hierarchy */
>  	unsigned int no_ext_tags:1;		/* No Extended Tags */
On 5/19/2025 6:39 PM, Ilpo Järvinen wrote:
> On Mon, 19 May 2025, Krishna Chaitanya Chundru wrote:
>
>> If the link is not up till the pwrctl drivers enable power to endpoints
>> then cur_bus_speed will not be updated with correct speed.
>>
>> As part of rescan, pci_pwrctrl_notify() will be called when new devices
>> are added and as part of it update the link bus speed.
>>
>> Suggested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
>> Signed-off-by: Krishna Chaitanya Chundru <krishna.chundru@oss.qualcomm.com>
>> ---
>>  drivers/pci/pwrctrl/core.c | 5 +++++
>>  1 file changed, 5 insertions(+)
>>
>> diff --git a/drivers/pci/pwrctrl/core.c b/drivers/pci/pwrctrl/core.c
>> index 9cc7e2b7f2b5608ee67c838b6500b2ae4a07ad52..034f0a5d7868fe956e3fc6a9b7ed485bb69caa04 100644
>> --- a/drivers/pci/pwrctrl/core.c
>> +++ b/drivers/pci/pwrctrl/core.c
>> @@ -10,16 +10,21 @@
>>  #include <linux/pci-pwrctrl.h>
>>  #include <linux/property.h>
>>  #include <linux/slab.h>
>> +#include "../pci.h"
>>
>>  static int pci_pwrctrl_notify(struct notifier_block *nb, unsigned long action,
>>  			      void *data)
>>  {
>>  	struct pci_pwrctrl *pwrctrl = container_of(nb, struct pci_pwrctrl, nb);
>>  	struct device *dev = data;
>> +	struct pci_bus *bus = to_pci_dev(dev)->bus;
>>
>>  	if (dev_fwnode(dev) != dev_fwnode(pwrctrl->dev))
>>  		return NOTIFY_DONE;
>>
>> +	if (bus->self)
>> +		pcie_update_link_speed((struct pci_bus *)bus);
>
> Why are you casting here?? (Perhaps it's a leftover).

Yeah, it is a leftover. I will remove it in the next patch.

- Krishna Chaitanya.

>> +
>>  	switch (action) {
>>  	case BUS_NOTIFY_ADD_DEVICE:
>>  		/*
On 5/19/2025 7:11 PM, Ilpo Järvinen wrote:
> On Mon, 19 May 2025, Krishna Chaitanya Chundru wrote:
>
> [...]
>
>> +	void (*post_scale_bus_bw)(struct pci_host_bridge *bridge, struct pci_dev *dev, int speed);
>
> I still don't like the names. Maybe simply pre/post_link_speed_change
> would sound more generic.

Ack, I will change the name in the next patch.

- Krishna Chaitanya.

> Not a show-stopper but the current name sounds pretty esoteric.
On 5/21/2025 8:22 PM, Jeffrey Hugo wrote:
> On 5/19/2025 3:42 AM, Krishna Chaitanya Chundru wrote:
>> From: Vivek Pernamitta <quic_vpernami@quicinc.com>
>>
>> As per MHI spec v1.2, sec 6.6, MHI has capability registers which are
>> located after the ERDB array. The location of this group of registers is
>> indicated by the MISCOFF register. Each capability has a capability ID to
>> determine which functionality is supported and each capability will point
>> to the next capability supported.
>>
>> Add a basic function to read those capabilities offsets.
>>
>> Signed-off-by: Vivek Pernamitta <quic_vpernami@quicinc.com>
>> Signed-off-by: Krishna Chaitanya Chundru <krishna.chundru@oss.qualcomm.com>
>> ---
>>  drivers/bus/mhi/common.h    |  4 ++++
>>  drivers/bus/mhi/host/init.c | 29 +++++++++++++++++++++++++++++
>>  2 files changed, 33 insertions(+)
>>
>> diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
>> index dda340aaed95a5573a2ec776ca712e11a1ed0b52..eedac801b80021e44f7c65d33cd50760e06c02f2 100644
>> --- a/drivers/bus/mhi/common.h
>> +++ b/drivers/bus/mhi/common.h
>> @@ -16,6 +16,7 @@
>>  #define MHICFG 0x10
>>  #define CHDBOFF 0x18
>>  #define ERDBOFF 0x20
>> +#define MISCOFF 0x24
>>  #define BHIOFF 0x28
>>  #define BHIEOFF 0x2c
>>  #define DEBUGOFF 0x30
>> @@ -113,6 +114,9 @@
>>  #define MHISTATUS_MHISTATE_MASK GENMASK(15, 8)
>>  #define MHISTATUS_SYSERR_MASK BIT(2)
>>  #define MHISTATUS_READY_MASK BIT(0)
>> +#define MISC_CAP_MASK GENMASK(31, 0)
>> +#define CAP_CAPID_MASK GENMASK(31, 24)
>> +#define CAP_NEXT_CAP_MASK GENMASK(23, 12)
>>
>>  /* Command Ring Element macros */
>>  /* No operation command */
>> diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
>> index 13e7a55f54ff45b83b3f18b97e2cdd83d4836fe3..a7137a040bdce1c58c98fe9c2340aae4cc4387d1 100644
>> --- a/drivers/bus/mhi/host/init.c
>> +++ b/drivers/bus/mhi/host/init.c
>> @@ -467,6 +467,35 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
>>  	return ret;
>>  }
>>
>> +static int mhi_find_capability(struct mhi_controller *mhi_cntrl, u32 capability, u32 *offset)
>> +{
>> +	u32 val, cur_cap, next_offset;
>> +	int ret;
>> +
>> +	/* Get the 1st supported capability offset */
>
> "first"? Does not seem like you are short on space here.

The MISC register will have the offset of the first capability register;
from there the capabilities form a linked list.

>> +	ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, MISCOFF,
>> +				 MISC_CAP_MASK, offset);
>
> This fits on one line.
>
>> +	if (ret)
>> +		return ret;
>
> Blank line here would be nice.
>
>> +	do {
>> +		if (*offset >= mhi_cntrl->reg_len)
>> +			return -ENXIO;
>> +
>> +		ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, *offset, &val);
>> +		if (ret)
>> +			return ret;
>
> There is no sanity checking we can do on val? We've had plenty of
> issues blindly trusting the device. I would like to avoid having more.

We can check that val is not all F's as a sanity check; other than that we
can't check anything else, as we don't know whether the value is valid or
not. Let me know if you have any thoughts on this.

> Also looks like if we find the capability we are looking for, we return
> the offset without validating it.

For the offset I can add a check to make sure it is not crossing the MHI
reg_len, like above.

- Krishna Chaitanya.

>> +
>> +		cur_cap = FIELD_GET(CAP_CAPID_MASK, val);
>> +		next_offset = FIELD_GET(CAP_NEXT_CAP_MASK, val);
>> +		if (cur_cap == capability)
>> +			return 0;
>> +
>> +		*offset = next_offset;
>> +	} while (next_offset);
>> +
>> +	return -ENXIO;
>> +}
>> +
>>  int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
>>  {
>>  	u32 val;
As per MHI spec sec 14, MHI supports bandwidth scaling to reduce power
consumption. MHI bandwidth scaling is advertised by devices that contain
the bandwidth scaling capability registers. If enabled, the device
aggregates bandwidth requirements and sends them to the host in the form
of an event. After the host performs the bandwidth switch, it sends an
acknowledgment by ringing a doorbell.

If the host supports bandwidth scaling events, then in the bandwidth
scaling capability register it must set the BW_CFG.ENABLED bit, set
BW_CFG.DB_CHAN_ID to the channel ID of the doorbell that will be used by
the host to communicate the bandwidth scaling status, and set
BW_CFG.ER_INDEX to the index of the event ring to which the device should
send bandwidth scaling requests.

As part of MMIO init, check whether the bandwidth scale capability is
present; if it is, advertise that the host supports bandwidth scaling by
setting all the required fields.

The MHI layer only forwards the bandwidth scaling request to the
controller driver; it is the responsibility of the controller driver to
do the actual bandwidth scaling and then pass the status back to MHI. MHI
responds to the device based upon the status of the bandwidth scale
request it receives.

Add a new get_misc_doorbell() to get a doorbell for misc capabilities, so
the doorbell can be used with MHI events like MHI bandwidth scaling.

Use a workqueue & mutex for the bandwidth scale events, as
pci_set_target_speed(), which will be called by the MHI controller
driver, can sleep.

If the driver wants to move to a higher data rate/speed than the current
one, then the controller driver may need to change certain votes so that
the link may come up at the requested data rate/speed; for example, QCOM
PCIe controllers need to change their RPMh (Resource Power
Manager-hardened) state. Also, once link retraining is done, controller
drivers need to adjust their votes based on the final data rate/speed.
Some controllers also may need to update their bandwidth voting, like ICC
bw votes etc.
So, add pre_scale_bus_bw() & post_scale_bus_bw() ops to call before &
after the link re-train. There are no explicit locking mechanisms as
these are called by a single client endpoint driver.

In case of a PCIe switch, if there is a request to change the target
speed for a downstream port, there is no need to call these function ops
as they are outside the scope of the controller drivers.

Signed-off-by: Krishna Chaitanya Chundru <krishna.chundru@oss.qualcomm.com>
---
Changes in v3:
- Move update speed logic to pwrctrl driver (Mani)
- Move pre_bus_bw & post_bus_bw to the bridge as these are bridge driver
  specific ops; it feels to me we need to add these in the host bridge
  driver, similar to the recently added reset_slot.
- Remove dwc level wrapper (Mani)
- Enable ASPM only if they are enabled already (Mani)
- Change the name of mhi_get_capability_offset() to mhi_find_capability()
  (Bjorn)
- Fix comments in the code, subjects etc (Mani & Bjorn)
- Link to v2: https://lore.kernel.org/r/20250313-mhi_bw_up-v2-0-869ca32170bf@oss.qualcomm.com

Changes in v2:
- Update the comments.
- Split the ICC bw patch as a separate one (Bjorn)
- Update the ASPM disablement comment (Bjorn)
- Use FIELD_GET & FIELD_PREP instead of hard macros, and a couple of nits
  suggested by Ilpo Järvinen
- Create a new function to convert lnkctl2 speed to enum pci_bus_speed
  (Jeff)
- Link to v1: https://lore.kernel.org/r/20250217-mhi_bw_up-v1-0-9bad1e42bdb1@oss.qualcomm.com

---
Krishna Chaitanya Chundru (9):
      PCI: Update current bus speed as part of pci_pwrctrl_notify()
      PCI/bwctrl: Add support to scale bandwidth before & after link re-training
      PCI/ASPM: Return enabled ASPM states as part of pcie_aspm_enabled()
      PCI/ASPM: Clear aspm_disable as part of __pci_enable_link_state()
      PCI: qcom: Extract core logic from qcom_pcie_icc_opp_update()
      PCI: qcom: Add support for PCIe bus bw scaling
      bus: mhi: host: Add support for Bandwidth scale
      PCI: Export pci_set_target_speed()
      PCI: Add function to convert lnkctl2speed to pci_bus_speed

Miaoqing Pan (1):
      wifi: ath11k: Add support for MHI bandwidth scaling

Vivek Pernamitta (1):
      bus: mhi: host: Add support to read MHI capabilities

 drivers/bus/mhi/common.h               |  20 ++++++
 drivers/bus/mhi/host/init.c            |  90 +++++++++++++++++++++++-
 drivers/bus/mhi/host/internal.h        |   7 +-
 drivers/bus/mhi/host/main.c            |  98 +++++++++++++++++++++++++-
 drivers/bus/mhi/host/pm.c              |  10 ++-
 drivers/net/wireless/ath/ath11k/mhi.c  |  41 +++++++++++
 drivers/pci/controller/dwc/pcie-qcom.c | 124 ++++++++++++++++++++++++++-------
 drivers/pci/pci.c                      |  12 ++++
 drivers/pci/pcie/aspm.c                |   5 +-
 drivers/pci/pcie/bwctrl.c              |  16 +++++
 drivers/pci/pwrctrl/core.c             |   5 ++
 include/linux/mhi.h                    |  13 ++++
 include/linux/pci.h                    |  19 ++++-
 13 files changed, 425 insertions(+), 35 deletions(-)
---
base-commit: fee3e843b309444f48157e2188efa6818bae85cf
change-id: 20250217-mhi_bw_up-f31306a5631b

Best regards,