Message ID | 20231117113931.26660-1-quic_sibis@quicinc.com
---|---
Series | dts: qcom: Introduce X1E80100 platforms device tree
On 17.11.2023 12:39, Sibi Sankar wrote:
> Add basic support for X1E80100 CRD board dts, which allows it to boot
> to a shell.
>
> Signed-off-by: Abel Vesa <abel.vesa@linaro.org>
> Signed-off-by: Rajendra Nayak <quic_rjendra@quicinc.com>
> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
> ---

pretty much just the same question about pins <34 2>
otherwise

Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org>

Konrad
On 11/20/23 12:14, Sibi Sankar wrote:
> Hey Rob,
>
> Thanks for taking time to review the series.
>
> On 11/19/23 21:29, Rob Herring wrote:
>> On Fri, Nov 17, 2023 at 05:09:27PM +0530, Sibi Sankar wrote:
>>> From: Rajendra Nayak <quic_rjendra@quicinc.com>
>>>
>>> These are the CPU cores in Qualcomm's X1E80100 SoC.
>>>
>>> Signed-off-by: Rajendra Nayak <quic_rjendra@quicinc.com>
>>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
>>> ---
>>>
>>> v2:
>>> * Update the part number from sc8380xp to x1e80100.
>>>
>>>  Documentation/devicetree/bindings/arm/cpus.yaml | 1 +
>>>  1 file changed, 1 insertion(+)
>>>
>>> diff --git a/Documentation/devicetree/bindings/arm/cpus.yaml b/Documentation/devicetree/bindings/arm/cpus.yaml
>>> index ffd526363fda..cc5a21b47e26 100644
>>> --- a/Documentation/devicetree/bindings/arm/cpus.yaml
>>> +++ b/Documentation/devicetree/bindings/arm/cpus.yaml
>>> @@ -198,6 +198,7 @@ properties:
>>>                - qcom,kryo660
>>>                - qcom,kryo685
>>>                - qcom,kryo780
>>> +              - qcom,oryon
>>
>> Wasn't it previously said 'oryon' is not specific enough?

https://lore.kernel.org/lkml/b165d2cd-e8da-4f6d-9ecf-14df2b803614@linaro.org/

The CPU part numbers were indeed different in engineering samples,
which has now been fixed in the production version.

-Sibi

>>
>> Also, please describe what oryon is in the commit msg.
>
> ack. Will add more details in the next re-spin.
>
> -Sibi
>
>>
>> Rob
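For reference, the compatible being added above is consumed by CPU nodes in
the SoC dtsi. A minimal sketch of such a node, modelled on existing
qcom,kryo* users of this binding, is shown below; the node name, reg value
and enable-method are illustrative assumptions and are not taken from the
posted patch.

	/* Illustrative sketch only: a CPU node using the new compatible.
	 * The reg value and enable-method are assumed for the example.
	 */
	cpus {
		#address-cells = <2>;
		#size-cells = <0>;

		cpu@0 {
			device_type = "cpu";
			compatible = "qcom,oryon";
			reg = <0x0 0x0>;
			enable-method = "psci";
		};
	};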
On 11/29/23 18:24, Konrad Dybcio wrote:
> On 29.11.2023 10:25, Sibi Sankar wrote:
>>
>>
>> On 11/18/23 06:36, Konrad Dybcio wrote:
>>> On 17.11.2023 12:39, Sibi Sankar wrote:
>>>> From: Rajendra Nayak <quic_rjendra@quicinc.com>
>>>>
>>>> Add base dtsi and QCP board (Qualcomm Compute Platform) dts file for
>>>> X1E80100 SoC, describing the CPUs, GCC and RPMHCC clock controllers,
>>>> geni UART, interrupt controller, TLMM, reserved memory, interconnects,
>>>> SMMU and LLCC nodes.
>>>>
>>>> Co-developed-by: Abel Vesa <abel.vesa@linaro.org>
>>>> Signed-off-by: Abel Vesa <abel.vesa@linaro.org>
>>>> Signed-off-by: Rajendra Nayak <quic_rjendra@quicinc.com>
>>>> Co-developed-by: Sibi Sankar <quic_sibis@quicinc.com>
>>>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
>>>> ---
> [...]
>
>
>>>> +		idle-states {
>>>> +			entry-method = "psci";
>>>> +
>>>> +			CLUSTER_C4: cpu-sleep-0 {
>>>> +				compatible = "arm,idle-state";
>>>> +				idle-state-name = "ret";
>>>> +				arm,psci-suspend-param = <0x00000004>;
>>> These suspend parameters look funky.. is this just a PSCI sleep
>>> implementation that strays far away from Arm's suggested guidelines?
>>
>> not really! it's just that the 30th bit is set according to the spec,
>> i.e. it's marked as a retention state.
> So, is there no state where the cores actually power down? Or is it
> not described yet?
>
> FWIW by "power down" I mean it in the sense that Arm DEN0022D does,
> so "In this state the core is powered off. Software on the device
> needs to save all core state, so that it can be preserved over
> the powerdown."

I was told we mark it explicitly as retention because hw is expected
to handle powerdown and we don't want sw to also do the same.

>
>>
>>>
>>> [...]
>>>
>>>
>>>> +		CPU_PD11: power-domain-cpu11 {
>>>> +			#power-domain-cells = <0>;
>>>> +			power-domains = <&CLUSTER_PD>;
>>>> +		};
>>>> +
>>>> +		CLUSTER_PD: power-domain-cpu-cluster {
>>>> +			#power-domain-cells = <0>;
>>>> +			domain-idle-states = <&CLUSTER_CL4>, <&CLUSTER_CL5>;
>>>> +		};
>>> So, can the 3 clusters not shut down their L2 and PLLs (if separate?)
>>> on their own?
>>
>> on CL5 the clusters are expected to shut down their L2 and PLL on
>> their own.
> Then I think this won't happen with this description
>
> every cpu has a genpd tree like this:
>
> cpu_n
> |_CPU_PDn
>   |_CLUSTER_PD
>
> and CLUSTER_PD has two idle states: CLUSTER_CL4 and CLUSTER_CL5
>
> which IIUC means that neither cluster idle state will be reached
> unless all children of CLUSTER_PD (so, all CPUs) go down that low
>
> This is "fine" on e.g. sc8280 where both CPU clusters are part of
> the same Arm DynamIQ cluster (which is considered one cluster as
> far as MPIDR_EL1 goes) (though perhaps that's misleading and with
> the qcom plumbing they perhaps could actually be collapsed separately)

We did verify that the sleep stats increase independently for each
cluster, so its behavior is unlike what you explained above. I'll
re-spin this series again in the meantime and you can take another
stab at it there.

-Sibi

>
> Konrad
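To make the hierarchy under discussion easier to follow: Konrad's point is
that every CPU's PSCI power domain is parented to the single CLUSTER_PD, so
the cluster-level domain-idle-states can only be entered once every child
domain (i.e. every CPU) has gone idle. The sketch below reuses the labels
from the quoted patch; the surrounding psci node layout and the per-CPU
domain-idle-states reference are assumptions added for illustration, not
text from the posted series.

	/* Sketch of the genpd shape described above, using labels from
	 * the quoted patch. The psci wrapper node and the per-CPU
	 * domain-idle-states reference are assumed for illustration.
	 */
	psci {
		compatible = "arm,psci-1.0";
		method = "smc";

		CPU_PD0: power-domain-cpu0 {
			#power-domain-cells = <0>;
			power-domains = <&CLUSTER_PD>;
			domain-idle-states = <&CLUSTER_C4>;
		};

		/* ... CPU_PD1 through CPU_PD11 follow the same pattern ... */

		CLUSTER_PD: power-domain-cpu-cluster {
			#power-domain-cells = <0>;
			domain-idle-states = <&CLUSTER_CL4>, <&CLUSTER_CL5>;
		};
	};

Each cpu node would then reference its own CPU_PDn via power-domains, which
is what produces the per-CPU genpd tree Konrad sketches with cpu_n ->
CPU_PDn -> CLUSTER_PD.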
On 29.11.2023 16:46, Sibi Sankar wrote:
>
>
> On 11/29/23 18:24, Konrad Dybcio wrote:
>> On 29.11.2023 10:25, Sibi Sankar wrote:
>>>
>>>
>>> On 11/18/23 06:36, Konrad Dybcio wrote:
>>>> On 17.11.2023 12:39, Sibi Sankar wrote:
>>>>> From: Rajendra Nayak <quic_rjendra@quicinc.com>
>>>>>
>>>>> Add base dtsi and QCP board (Qualcomm Compute Platform) dts file for
>>>>> X1E80100 SoC, describing the CPUs, GCC and RPMHCC clock controllers,
>>>>> geni UART, interrupt controller, TLMM, reserved memory, interconnects,
>>>>> SMMU and LLCC nodes.
>>>>>
>>>>> Co-developed-by: Abel Vesa <abel.vesa@linaro.org>
>>>>> Signed-off-by: Abel Vesa <abel.vesa@linaro.org>
>>>>> Signed-off-by: Rajendra Nayak <quic_rjendra@quicinc.com>
>>>>> Co-developed-by: Sibi Sankar <quic_sibis@quicinc.com>
>>>>> Signed-off-by: Sibi Sankar <quic_sibis@quicinc.com>
>>>>> ---
>> [...]
>>
>>
>>>>> +		idle-states {
>>>>> +			entry-method = "psci";
>>>>> +
>>>>> +			CLUSTER_C4: cpu-sleep-0 {
>>>>> +				compatible = "arm,idle-state";
>>>>> +				idle-state-name = "ret";
>>>>> +				arm,psci-suspend-param = <0x00000004>;
>>>> These suspend parameters look funky.. is this just a PSCI sleep
>>>> implementation that strays far away from Arm's suggested guidelines?
>>>
>>> not really! it's just that the 30th bit is set according to the spec,
>>> i.e. it's marked as a retention state.
>> So, is there no state where the cores actually power down? Or is it
>> not described yet?
>>
>> FWIW by "power down" I mean it in the sense that Arm DEN0022D does,
>> so "In this state the core is powered off. Software on the device
>> needs to save all core state, so that it can be preserved over
>> the powerdown."
>
> I was told we mark it explicitly as retention because hw is expected
> to handle powerdown and we don't want sw to also do the same.
>
>>
>>>
>>>>
>>>> [...]
>>>>
>>>>
>>>>> +		CPU_PD11: power-domain-cpu11 {
>>>>> +			#power-domain-cells = <0>;
>>>>> +			power-domains = <&CLUSTER_PD>;
>>>>> +		};
>>>>> +
>>>>> +		CLUSTER_PD: power-domain-cpu-cluster {
>>>>> +			#power-domain-cells = <0>;
>>>>> +			domain-idle-states = <&CLUSTER_CL4>, <&CLUSTER_CL5>;
>>>>> +		};
>>>> So, can the 3 clusters not shut down their L2 and PLLs (if separate?)
>>>> on their own?
>>>
>>> on CL5 the clusters are expected to shut down their L2 and PLL on
>>> their own.
>> Then I think this won't happen with this description
>>
>> every cpu has a genpd tree like this:
>>
>> cpu_n
>> |_CPU_PDn
>>   |_CLUSTER_PD
>>
>> and CLUSTER_PD has two idle states: CLUSTER_CL4 and CLUSTER_CL5
>>
>> which IIUC means that neither cluster idle state will be reached
>> unless all children of CLUSTER_PD (so, all CPUs) go down that low
>>
>> This is "fine" on e.g. sc8280 where both CPU clusters are part of
>> the same Arm DynamIQ cluster (which is considered one cluster as
>> far as MPIDR_EL1 goes) (though perhaps that's misleading and with
>> the qcom plumbing they perhaps could actually be collapsed separately)
>
> We did verify that the sleep stats increase independently for each
> cluster, so its behavior is unlike what you explained above. I'll
> re-spin this series again in the meantime and you can take another
> stab at it there.

So are you saying that you checked the RPMh sleep stats and each cluster
managed to sleep on its own, or did you do something different?

Were the sleep durations far apart? What's the order of magnitude of that
difference?

Are the values reported in RPMh greater than those in
/sys/kernel/debug/pm_genpd/power-domain-cpu-cluster/total_idle_time?

Is there any other (i.e. non-Linux) source of "go to sleep" votes?

Konrad
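For context on the alternative Konrad's genpd argument points toward: if the
three clusters really can collapse independently, each cluster would
typically get its own power domain, so genpd can enter a cluster idle state
as soon as that cluster's own CPUs (rather than all twelve) are idle. The
sketch below is purely hypothetical; the CLUSTERn_PD labels do not appear
in the posted series and are assumptions for illustration only.

	/* Hypothetical illustration, not from the posted series:
	 * per-cluster power domains that would let each cluster reach
	 * CL4/CL5 once only its own CPUs are idle. The CPU_PDn nodes of
	 * cluster 0 would then set power-domains = <&CLUSTER0_PD>, those
	 * of cluster 1 power-domains = <&CLUSTER1_PD>, and so on.
	 */
	CLUSTER0_PD: power-domain-cluster0 {
		#power-domain-cells = <0>;
		domain-idle-states = <&CLUSTER_CL4>, <&CLUSTER_CL5>;
	};

	CLUSTER1_PD: power-domain-cluster1 {
		#power-domain-cells = <0>;
		domain-idle-states = <&CLUSTER_CL4>, <&CLUSTER_CL5>;
	};

	CLUSTER2_PD: power-domain-cluster2 {
		#power-domain-cells = <0>;
		domain-idle-states = <&CLUSTER_CL4>, <&CLUSTER_CL5>;
	};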