
[0/3] i2c: qup: Allow scaling power domains and interconnect

Message ID 20231128-i2c-qup-dvfs-v1-0-59a0e3039111@kernkonzept.com

Message

Stephan Gerhold Nov. 28, 2023, 9:48 a.m. UTC
Make it possible to scale performance states of the power domain and
interconnect of the I2C QUP controller.

This is necessary to guarantee performance with power management
enabled. Otherwise these resources might run at minimal performance
state which is not sufficient for certain workloads.

Signed-off-by: Stephan Gerhold <stephan.gerhold@kernkonzept.com>
---
Stephan Gerhold (3):
      dt-bindings: i2c: qcom,i2c-qup: Document power-domains
      dt-bindings: i2c: qup: Document interconnects
      i2c: qup: Vote for interconnect bandwidth to DRAM

 .../devicetree/bindings/i2c/qcom,i2c-qup.yaml      | 14 +++++++++
 drivers/i2c/busses/i2c-qup.c                       | 36 ++++++++++++++++++++++
 2 files changed, 50 insertions(+)
---
base-commit: b85ea95d086471afb4ad062012a4d73cd328fa86
change-id: 20231106-i2c-qup-dvfs-bc60e2998dd8

Best regards,

Comments

Andi Shyti Feb. 18, 2025, 11:13 p.m. UTC | #1
Sorry for replying to my own mail, but I needed to fix Stephan's and
Konrad's email addresses.

On Wed, Feb 19, 2025 at 12:02:11AM +0100, Andi Shyti wrote:
> Hi Stephan,
> 
> sorry for the very late reply here. Just one question.
> 
> ...
> 
> > downstream/vendor driver [1]. Due to lack of documentation about the
> > interconnect setup/behavior I cannot say exactly if this is right.
> > Unfortunately, this is not implemented very consistently downstream...
> 
> Can we have someone from Qualcomm or Linaro taking a peek here?
> 
> > [1]: https://git.codelinaro.org/clo/la/kernel/msm-3.10/-/commit/67174e2624ea64814231e7e1e4af83fd882302c6
> 
> ...
> 
> > @@ -1745,6 +1775,11 @@ static int qup_i2c_probe(struct platform_device *pdev)
> >  			goto fail_dma;
> >  		}
> >  		qup->is_dma = true;
> > +
> > +		qup->icc_path = devm_of_icc_get(&pdev->dev, NULL);
> > +		if (IS_ERR(qup->icc_path))
> > +			return dev_err_probe(&pdev->dev, PTR_ERR(qup->icc_path),
> > +					     "failed to get interconnect path\n");
> 
> Can we live without it if it fails?
> 
> Thanks,
> Andi
Krzysztof Kozlowski Feb. 19, 2025, 7 a.m. UTC | #2
On 19/02/2025 00:02, Andi Shyti wrote:
> Hi Stephan,
> 
> sorry for the very late reply here. Just one question.
> 
> ...
> 
>> downstream/vendor driver [1]. Due to lack of documentation about the
>> interconnect setup/behavior I cannot say exactly if this is right.
>> Unfortunately, this is not implemented very consistently downstream...
> 
> Can we have someone from Qualcomm or Linaro taking a peek here?

You replied to some old email, not in my inbox anymore, but your quote
lacks standard quote-template, like:

	On 19/02/2025 00:02, Andi Shyti wrote:

so I really don't know when it was sent. For sure more than a month ago,
maybe more? This has to be resent if you want anything done here.

Best regards,
Krzysztof
Stephan Gerhold Feb. 19, 2025, 10:40 a.m. UTC | #3
Hi Andi,

On Wed, Feb 19, 2025 at 12:02:06AM +0100, Andi Shyti wrote:
> 
> sorry for the very late reply here. Just one question.
> 

Thanks for bringing the patch back up after such a long time. I've been
meaning to resend it, but never found the time to do so... :-)

> 
> > downstream/vendor driver [1]. Due to lack of documentation about the
> > interconnect setup/behavior I cannot say exactly if this is right.
> > Unfortunately, this is not implemented very consistently downstream...
> 
> Can we have someone from Qualcomm or Linaro taking a peek here?
> 

I suppose I count as someone from Linaro nowadays. However, since this
driver is only used on really old platforms, I'm not sure where to look
or whom to ask...

In the end, the whole bus scaling/interconnect is always somewhat
"imprecise". There is no clear "correct" or "wrong", since the ideal
bandwidth depends heavily on the actual use case that we are not aware
of in the driver. There is also overhead when voting for bandwidth,
since that can take a couple of milliseconds.

The most important part is that we vote for any bandwidth at all, since
otherwise the bus path could potentially be completely off and it would
get stuck. My patch implements one of the approaches that was used in
the downstream/vendor drivers and matches what we already have upstream
in the corresponding spi-qup driver. I think it's "good enough". If
someone ever wants to fine tune this based on actual measurements they
can just submit an incremental patch. Right now this series is blocking
adding the necessary properties in the device tree and that's not good.

Surprisingly this series still applies cleanly on top of linux-next. The
dt-bindings have review tags and there was plenty of time for someone
else to chime in for the driver. So maybe you can just pick them up? :D

> > [1]: https://git.codelinaro.org/clo/la/kernel/msm-3.10/-/commit/67174e2624ea64814231e7e1e4af83fd882302c6
> 
> ...
> 
> > @@ -1745,6 +1775,11 @@ static int qup_i2c_probe(struct platform_device *pdev)
> >  			goto fail_dma;
> >  		}
> >  		qup->is_dma = true;
> > +
> > +		qup->icc_path = devm_of_icc_get(&pdev->dev, NULL);
> > +		if (IS_ERR(qup->icc_path))
> > +			return dev_err_probe(&pdev->dev, PTR_ERR(qup->icc_path),
> > +					     "failed to get interconnect path\n");
> 
> Can we live without it if it fails?
> 

of_icc_get() returns NULL if the interconnect API is disabled, or if
"interconnects" is not defined in the device tree, so this is already
handled. If "interconnects" is enabled and defined, I think we shouldn't
ignore errors. Therefore, this should work as intended.

Let me know if I should resend the patch or if you can apply it
directly.

Thanks,
Stephan
Andi Shyti Feb. 19, 2025, 7:30 p.m. UTC | #4
Hi Stephan,

On Wed, Feb 19, 2025 at 11:40:16AM +0100, Stephan Gerhold wrote:
> Hi Andi,
> 
> On Wed, Feb 19, 2025 at 12:02:06AM +0100, Andi Shyti wrote:
> > 
> > sorry for the very late reply here. Just one question.
> > 
> 
> Thanks for bringing the patch back up after such a long time. I've been
> meaning to resend it, but never found the time to do so... :-)

We have a long list of forgotten patches that belong to the far
past. I'm trying to revive them.

> > > downstream/vendor driver [1]. Due to lack of documentation about the
> > > interconnect setup/behavior I cannot say exactly if this is right.
> > > Unfortunately, this is not implemented very consistently downstream...
> > 
> > Can we have someone from Qualcomm or Linaro taking a peek here?
> > 
> 
> I suppose I count as someone from Linaro nowadays. However, since this
> driver is only used on really old platforms, I'm not sure where
> to look or whom to ask...
> 
> In the end, the whole bus scaling/interconnect is always somewhat
> "imprecise". There is no clear "correct" or "wrong", since the ideal
> bandwidth depends heavily on the actual use case that we are not aware
> of in the driver. There is also overhead when voting for bandwidth,
> since that can take a couple of milliseconds.
> 
> The most important part is that we vote for any bandwidth at all, since
> otherwise the bus path could potentially be completely off and it would
> get stuck. My patch implements one of the approaches that was used in
> the downstream/vendor drivers and matches what we already have upstream
> in the corresponding spi-qup driver. I think it's "good enough". If
> someone ever wants to fine tune this based on actual measurements they
> can just submit an incremental patch. Right now this series is blocking
> adding the necessary properties in the device tree and that's not good.
> 
> Surprisingly this series still applies cleanly on top of linux-next. The
> dt-bindings have review tags and there was plenty of time for someone
> else to chime in for the driver. So maybe you can just pick them up? :D

Yes, I already tested them.

> > > [1]: https://git.codelinaro.org/clo/la/kernel/msm-3.10/-/commit/67174e2624ea64814231e7e1e4af83fd882302c6
> > 
> > ...
> > 
> > > @@ -1745,6 +1775,11 @@ static int qup_i2c_probe(struct platform_device *pdev)
> > >  			goto fail_dma;
> > >  		}
> > >  		qup->is_dma = true;
> > > +
> > > +		qup->icc_path = devm_of_icc_get(&pdev->dev, NULL);
> > > +		if (IS_ERR(qup->icc_path))
> > > +			return dev_err_probe(&pdev->dev, PTR_ERR(qup->icc_path),
> > > +					     "failed to get interconnect path\n");
> > 
> > Can we live without it if it fails?
> > 
> 
> of_icc_get() returns NULL if the interconnect API is disabled, or if
> "interconnects" is not defined in the device tree, so this is already
> handled. If "interconnects" is enabled and defined, I think we shouldn't
> ignore errors. Therefore, this should work as intended.

yes, because qup_i2c_vote_bw() checks inside for NULL values.

My idea was that:

	if (IS_ERR(...)) {
		dev_warn(...)
		qup->icc_path = NULL;
	}

and let things work. Anyway, if you want to keep it this way,
that's fine with me; I don't have a strong opinion, just a
preference to keep things moving.

Thanks,
Andi

> Let me know if I should resend the patch or if you can apply it
> directly.
> 
> Thanks,
> Stephan
Andi Shyti Feb. 19, 2025, 7:36 p.m. UTC | #5
Hi Krzysztof,

On Wed, Feb 19, 2025 at 08:00:25AM +0100, Krzysztof Kozlowski wrote:
> On 19/02/2025 00:02, Andi Shyti wrote:
> > sorry for the very late reply here. Just one question.
> > 
> > ...
> > 
> >> downstream/vendor driver [1]. Due to lack of documentation about the
> >> interconnect setup/behavior I cannot say exactly if this is right.
> >> Unfortunately, this is not implemented very consistently downstream...
> > 
> > Can we have someone from Qualcomm or Linaro taking a peek here?
> 
> You replied to some old email, not in my inbox anymore,

feeling nostalgic :-)

> but your quote
> lacks standard quote-template, like:
> 
> 	On 19/02/2025 00:02, Andi Shyti wrote:

I was strictly following RFC 1855, but you're right, I removed a
bit too much and lost the time reference.

> so I really don't know when it was sent. For sure more than a month ago,
> maybe more? This has to be resent if you want anything done here.

It was sent on "Tue, 28 Nov 2023 10:48:34 +0100", definitely more
than a month ago, I'm also surprised to have it in my inbox. But
it still applies cleanly.

Perhaps a resend can invite people for more reviews. I don't
mind.

Thanks,
Andi
Stephan Gerhold Feb. 20, 2025, 9:47 a.m. UTC | #6
On Wed, Feb 19, 2025 at 08:30:35PM +0100, Andi Shyti wrote:
> On Wed, Feb 19, 2025 at 11:40:16AM +0100, Stephan Gerhold wrote:
> > On Wed, Feb 19, 2025 at 12:02:06AM +0100, Andi Shyti wrote:
> > > 
> > > sorry for the very late reply here. Just one question.
> > > 
> > 
> > Thanks for bringing the patch back up after such a long time. I've been
> > meaning to resend it, but never found the time to do so... :-)
> 
> We have a long list of forgotten patches that belong to the far
> past. I'm trying to revive them.
> 

Thanks, this is much appreciated!

> [...]
> > > > @@ -1745,6 +1775,11 @@ static int qup_i2c_probe(struct platform_device *pdev)
> > > >  			goto fail_dma;
> > > >  		}
> > > >  		qup->is_dma = true;
> > > > +
> > > > +		qup->icc_path = devm_of_icc_get(&pdev->dev, NULL);
> > > > +		if (IS_ERR(qup->icc_path))
> > > > +			return dev_err_probe(&pdev->dev, PTR_ERR(qup->icc_path),
> > > > +					     "failed to get interconnect path\n");
> > > 
> > > Can we live without it if it fails?
> > > 
> > 
> > of_icc_get() returns NULL if the interconnect API is disabled, or if
> > "interconnects" is not defined in the device tree, so this is already
> > handled. If "interconnects" is enabled and defined, I think we shouldn't
> > ignore errors. Therefore, this should work as intended.
> 
> yes, because qup_i2c_vote_bw() checks inside for NULL values.
> 
> My idea was that:
> 
> 	if (IS_ERR(...)) {
> 		dev_warn(...)
> 		qup->icc_path = NULL;
> 	}
> 
> and let things work. Anyway, if you want to keep it this way,
> fine with me, I don't have a strong opinion, rather than a
> preference to keep going.

I would prefer to keep it the way it is. It's okay to omit the
"interconnects" in the DT (either for old device trees, or because you
don't define the "dmas" either). But if they are defined, we should not
be ignoring errors. -EPROBE_DEFER definitely needs to be handled, but
even for -EINVAL or similar it would be better to make the failure
obvious, in my opinion.

None of the existing users should be affected, since no one defines
"interconnects" at the moment.

Thanks,
Stephan
Konrad Dybcio Feb. 25, 2025, 1:25 p.m. UTC | #7
On 19.02.2025 11:40 AM, Stephan Gerhold wrote:
> Hi Andi,
> 
> On Wed, Feb 19, 2025 at 12:02:06AM +0100, Andi Shyti wrote:
>>
>> sorry for the very late reply here. Just one question.
>>
> 
> Thanks for bringing the patch back up after such a long time. I've been
> meaning to resend it, but never found the time to do so... :-)
> 
>>
>>> downstream/vendor driver [1]. Due to lack of documentation about the
>>> interconnect setup/behavior I cannot say exactly if this is right.
>>> Unfortunately, this is not implemented very consistently downstream...
>>
>> Can we have someone from Qualcomm or Linaro taking a peek here?
>>
> 
> I suppose I count as someone from Linaro nowadays. However, since this
> driver is only used on really old platforms, I'm not sure where
> to look or whom to ask...
> 
> In the end, the whole bus scaling/interconnect is always somewhat
> "imprecise". There is no clear "correct" or "wrong", since the ideal
> bandwidth depends heavily on the actual use case that we are not aware
> of in the driver. There is also overhead when voting for bandwidth,
> since that can take a couple of milliseconds.
> 
> The most important part is that we vote for any bandwidth at all, since
> otherwise the bus path could potentially be completely off and it would
> get stuck. My patch implements one of the approaches that was used in
> the downstream/vendor drivers and matches what we already have upstream
> in the corresponding spi-qup driver. I think it's "good enough". If
> someone ever wants to fine tune this based on actual measurements they
> can just submit an incremental patch. Right now this series is blocking
> adding the necessary properties in the device tree and that's not good.

Yeah, the throughput of an I2C controller isn't even very likely to affect
the total bus frequency requirement, although it is strictly required
that the requested bw is nonzero (otherwise the bus may be clock-gated).

Konrad
Andi Shyti Feb. 26, 2025, 10:11 p.m. UTC | #8
Hi Stephan,

On Tue, Nov 28, 2023 at 10:48:34AM +0100, Stephan Gerhold wrote:
> Make it possible to scale performance states of the power domain and
> interconnect of the I2C QUP controller.
> 
> This is necessary to guarantee performance with power management
> enabled. Otherwise these resources might run at minimal performance
> state which is not sufficient for certain workloads.
> 
> Signed-off-by: Stephan Gerhold <stephan.gerhold@kernkonzept.com>

merged to i2c/i2c-host.

Thanks,
Andi