
[v5,1/2] dt-bindings: cpufreq: add virtual cpufreq device

Message ID 20240127004321.1902477-2-davidai@google.com
State New
Series: Improve VM CPUfreq and task placement behavior

Commit Message

David Dai Jan. 27, 2024, 12:43 a.m. UTC
Add bindings to represent a virtual cpufreq device.

Virtual machines may expose MMIO regions for a virtual cpufreq device
that guests use to read frequency information or to request frequency
selection. The virtual cpufreq device has an individual controller for
each frequency domain. Performance points for a given domain can be
normalized across all domains to make it easier for virtual machines
to migrate between hosts.

Co-developed-by: Saravana Kannan <saravanak@google.com>
Signed-off-by: Saravana Kannan <saravanak@google.com>
Signed-off-by: David Dai <davidai@google.com>
---
 .../cpufreq/qemu,cpufreq-virtual.yaml         | 110 ++++++++++++++++++
 1 file changed, 110 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/cpufreq/qemu,cpufreq-virtual.yaml

Comments

Saravana Kannan Jan. 31, 2024, 6:23 p.m. UTC | #1
On Wed, Jan 31, 2024 at 9:06 AM Rob Herring <robh@kernel.org> wrote:
>
> On Fri, Jan 26, 2024 at 04:43:15PM -0800, David Dai wrote:
> > Adding bindings to represent a virtual cpufreq device.
> >
> > Virtual machines may expose MMIO regions for a virtual cpufreq device
> > for guests to read frequency information or to request frequency
> > selection. The virtual cpufreq device has an individual controller for
> > each frequency domain. Performance points for a given domain can be
> > normalized across all domains for ease of allowing for virtual machines
> > to migrate between hosts.
> >
> > Co-developed-by: Saravana Kannan <saravanak@google.com>
> > Signed-off-by: Saravana Kannan <saravanak@google.com>
> > Signed-off-by: David Dai <davidai@google.com>
> > ---
> >  .../cpufreq/qemu,cpufreq-virtual.yaml         | 110 ++++++++++++++++++
>
> > +    const: qemu,virtual-cpufreq
>
> Well, the filename almost matches the compatible.
>
> > +
> > +  reg:
> > +    maxItems: 1
> > +    description:
> > +      Address and size of region containing frequency controls for each of the
> > +      frequency domains. Regions for each frequency domain are placed
> > +      contiguously and contain registers for controlling DVFS (Dynamic
> > +      Voltage and Frequency Scaling) characteristics. The size of the
> > +      region is proportional to
> > +      total number of frequency domains. This device also needs the CPUs to
> > +      list their OPPs using operating-points-v2 tables. The OPP tables for the
> > +      CPUs should use normalized "frequency" values where the OPP with the
> > +      highest performance among all the vCPUs is listed as 1024 KHz. The rest
> > +      of the frequencies of all the vCPUs should be normalized based on their
> > +      performance relative to that 1024 KHz OPP. This makes it much easier to
> > +      migrate the VM across systems which might have different physical CPU
> > +      OPPs.
> > +
> > +required:
> > +  - compatible
> > +  - reg
> > +
> > +additionalProperties: false
> > +
> > +examples:
> > +  - |
> > +    // This example shows a two CPU configuration with a frequency domain
> > +    // for each CPU showing normalized performance points.
> > +    cpus {
> > +      #address-cells = <1>;
> > +      #size-cells = <0>;
> > +
> > +      cpu@0 {
> > +        compatible = "arm,armv8";
> > +        device_type = "cpu";
> > +        reg = <0x0>;
> > +        operating-points-v2 = <&opp_table0>;
> > +      };
> > +
> > +      cpu@1 {
> > +        compatible = "arm,armv8";
> > +        device_type = "cpu";
> > +        reg = <0x0>;
> > +        operating-points-v2 = <&opp_table1>;
> > +      };
> > +    };
> > +
> > +    opp_table0: opp-table-0 {
> > +      compatible = "operating-points-v2";
> > +
> > +      opp64000 { opp-hz = /bits/ 64 <64000>; };
>
> opp-64000 is the preferred form.
>
> > +      opp128000 { opp-hz = /bits/ 64 <128000>; };
> > +      opp192000 { opp-hz = /bits/ 64 <192000>; };
> > +      opp256000 { opp-hz = /bits/ 64 <256000>; };
> > +      opp320000 { opp-hz = /bits/ 64 <320000>; };
> > +      opp384000 { opp-hz = /bits/ 64 <384000>; };
> > +      opp425000 { opp-hz = /bits/ 64 <425000>; };
> > +    };
> > +
> > +    opp_table1: opp-table-1 {
> > +      compatible = "operating-points-v2";
> > +
> > +      opp64000 { opp-hz = /bits/ 64 <64000>; };
> > +      opp128000 { opp-hz = /bits/ 64 <128000>; };
> > +      opp192000 { opp-hz = /bits/ 64 <192000>; };
> > +      opp256000 { opp-hz = /bits/ 64 <256000>; };
> > +      opp320000 { opp-hz = /bits/ 64 <320000>; };
> > +      opp384000 { opp-hz = /bits/ 64 <384000>; };
> > +      opp448000 { opp-hz = /bits/ 64 <448000>; };
> > +      opp512000 { opp-hz = /bits/ 64 <512000>; };
> > +      opp576000 { opp-hz = /bits/ 64 <576000>; };
> > +      opp640000 { opp-hz = /bits/ 64 <640000>; };
> > +      opp704000 { opp-hz = /bits/ 64 <704000>; };
> > +      opp768000 { opp-hz = /bits/ 64 <768000>; };
> > +      opp832000 { opp-hz = /bits/ 64 <832000>; };
> > +      opp896000 { opp-hz = /bits/ 64 <896000>; };
> > +      opp960000 { opp-hz = /bits/ 64 <960000>; };
> > +      opp1024000 { opp-hz = /bits/ 64 <1024000>; };
> > +
> > +    };
>
> I don't recall your prior versions having an OPP table. Maybe it was
> incomplete. You are designing the "h/w" interface. Why don't you make it
> discoverable or implicit (fixed for the h/w)?

We also need the OPP tables to indicate which CPUs are part of the
same cluster, etc. Don't want to invent a new "protocol" and just use
existing DT bindings.

> Do you really need it if the frequency is normalized?

Yeah, we can have little and big CPUs and want to emulate different
performance levels. So while the Fmax on big is 1024, we still want to
be able to say little is 425. So we definitely need frequency tables.
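The normalization being described amounts to scaling each vCPU's peak performance against the fastest vCPU, with 1024 as the top of the scale. A minimal sketch, using made-up benchmark scores (the 1000 and 415 values below are assumptions for illustration, not numbers from the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Normalize a per-vCPU performance value so that the highest-performing
 * OPP among all vCPUs maps to 1024, as the binding's description asks.
 * Inputs are hypothetical benchmark scores, not real data.
 */
static uint32_t normalize(uint64_t perf, uint64_t max_perf_all_cpus)
{
	return (uint32_t)((perf * 1024u) / max_perf_all_cpus);
}
```

With a big core scoring 1000 and a little core scoring 415, the big core's Fmax normalizes to 1024 and the little core's to 424, close to the 425 used in the example OPP tables.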

> Also, we have "opp-level" for opaque values that aren't Hz.

Still want to keep it Hz to be compatible with arch_freq_scale and
when virtualized CPU perf counters are available.

-Saravana
Rob Herring Feb. 2, 2024, 3:53 p.m. UTC | #2
On Wed, Jan 31, 2024 at 10:23:03AM -0800, Saravana Kannan wrote:
> On Wed, Jan 31, 2024 at 9:06 AM Rob Herring <robh@kernel.org> wrote:
> >
> > On Fri, Jan 26, 2024 at 04:43:15PM -0800, David Dai wrote:
> > > [...]
> >
> > I don't recall your prior versions having an OPP table. Maybe it was
> > incomplete. You are designing the "h/w" interface. Why don't you make it
> > discoverable or implicit (fixed for the h/w)?
> 
> We also need the OPP tables to indicate which CPUs are part of the
> same cluster, etc. Don't want to invent a new "protocol" and just use
> existing DT bindings.

Topology binding is for that.

What about when x86 and other ACPI systems need to do this too? You 
define a discoverable interface, then it works regardless of firmware. 
KVM, Virtio, VFIO, etc. are all their own protocols.

> > Do you really need it if the frequency is normalized?
> 
> Yeah, we can have little and big CPUs and want to emulate different
> performance levels. So while the Fmax on big is 1024, we still want to
> be able to say little is 425. So we definitely need frequency tables.

You need per CPU Fmax, sure. But all the frequencies? I don't follow why 
you don't just have a max available capacity and then request the 
desired capacity. Then the host maps that to an underlying OPP. Why have 
an intermediate set of fake frequencies?

As these are normalized, I guess you are normalizing for capacity as 
well? Or you are using "capacity-dmips-mhz"? 

I'm also lost how this would work when you migrate and the underlying 
CPU changes. The DT is fixed.

> > Also, we have "opp-level" for opaque values that aren't Hz.
> 
> Still want to keep it Hz to be compatible with arch_freq_scale and
> when virtualized CPU perf counters are available.

Seems like no one would want "opp-level" then. Shrug.

Anyway, if Viresh and Marc are fine with all this, I'll shut up.

Rob
Marc Zyngier Feb. 4, 2024, 10:23 a.m. UTC | #3
On Fri, 02 Feb 2024 15:53:52 +0000,
Rob Herring <robh@kernel.org> wrote:
> 
> On Wed, Jan 31, 2024 at 10:23:03AM -0800, Saravana Kannan wrote:
> > On Wed, Jan 31, 2024 at 9:06 AM Rob Herring <robh@kernel.org> wrote:
> > >
> > > On Fri, Jan 26, 2024 at 04:43:15PM -0800, David Dai wrote:
> > > > [...]
> > >
> > > I don't recall your prior versions having an OPP table. Maybe it was
> > > incomplete. You are designing the "h/w" interface. Why don't you make it
> > > discoverable or implicit (fixed for the h/w)?
> > 
> > We also need the OPP tables to indicate which CPUs are part of the
> > same cluster, etc. Don't want to invent a new "protocol" and just use
> > existing DT bindings.
> 
> Topology binding is for that.
> 
> What about when x86 and other ACPI systems need to do this too? You 
> define a discoverable interface, then it works regardless of firmware. 
> KVM, Virtio, VFIO, etc. are all their own protocols.

Describing the emulated HW in ACPI would seem appropriate to me. We
are talking about the guest here, so whether this is KVM or not is not
relevant. If this was actually doing any sort of data transfer, using
virtio would have been acceptable. But given how simple this is,
piggybacking on virtio is hardly appropriate.

> 
> > > Do you really need it if the frequency is normalized?
> > 
> > Yeah, we can have little and big CPUs and want to emulate different
> > performance levels. So while the Fmax on big is 1024, we still want to
> > be able to say little is 425. So we definitely need frequency tables.
> 
> You need per CPU Fmax, sure. But all the frequencies? I don't follow why 
> you don't just have a max available capacity and then request the 
> desired capacity. Then the host maps that to an underlying OPP. Why have 
> an intermediate set of fake frequencies?
> 
> As these are normalized, I guess you are normalizing for capacity as 
> well? Or you are using "capacity-dmips-mhz"? 
> 
> I'm also lost how this would work when you migrate and the underlying 
> CPU changes. The DT is fixed.
> 
> > > Also, we have "opp-level" for opaque values that aren't Hz.
> > 
> > Still want to keep it Hz to be compatible with arch_freq_scale and
> > when virtualized CPU perf counters are available.
> 
> Seems like no one would want "opp-level" then. Shrug.
> 
> Anyway, if Viresh and Marc are fine with all this, I'll shut up.

Well, I've said it before, and I'll say it again: the use of
*frequencies* makes no sense. It is a lie (it doesn't describe any
hardware, physical nor virtual), and doesn't reflect the way the
emulated cpufreq controller behaves either (since it scales everything
back to what the host can potentially do)

The closest abstraction we have to this is the unit-less capacity. And
*that* reflects the way the emulated cpufreq controller works while
avoiding lying to the guest about some arbitrary frequency.

In practice, this changes nothing to either the code or the behaviour.
But it changes the binding.

	M.
Viresh Kumar Feb. 5, 2024, 8:38 a.m. UTC | #4
On 02-02-24, 09:53, Rob Herring wrote:
> On Wed, Jan 31, 2024 at 10:23:03AM -0800, Saravana Kannan wrote:
> > We also need the OPP tables to indicate which CPUs are part of the
> > same cluster, etc. Don't want to invent a new "protocol" and just use
> > existing DT bindings.
> 
> Topology binding is for that.

This one, right ?

Documentation/devicetree/bindings/dvfs/performance-domain.yaml

> You need per CPU Fmax, sure. But all the frequencies? I don't follow why 
> you don't just have a max available capacity and then request the 
> desired capacity. Then the host maps that to an underlying OPP. Why have 
> an intermediate set of fake frequencies?

+1

> As these are normalized, I guess you are normalizing for capacity as 
> well? Or you are using "capacity-dmips-mhz"? 
> 
> I'm also lost how this would work when you migrate and the underlying 
> CPU changes. The DT is fixed.
> 
> > > Also, we have "opp-level" for opaque values that aren't Hz.
> > 
> > Still want to keep it Hz to be compatible with arch_freq_scale and
> > when virtualized CPU perf counters are available.

These are all specific to a single driver and can be handled easily, I
guess. I don't see the value of using Hz for this, to be honest.
Rob Herring Feb. 5, 2024, 4:39 p.m. UTC | #5
On Mon, Feb 05, 2024 at 02:08:30PM +0530, Viresh Kumar wrote:
> On 02-02-24, 09:53, Rob Herring wrote:
> > On Wed, Jan 31, 2024 at 10:23:03AM -0800, Saravana Kannan wrote:
> > > We also need the OPP tables to indicate which CPUs are part of the
> > > same cluster, etc. Don't want to invent a new "protocol" and just use
> > > existing DT bindings.
> > 
> > Topology binding is for that.
> 
> This one, right ?
> 
> Documentation/devicetree/bindings/dvfs/performance-domain.yaml

No, Documentation/devicetree/bindings/cpu/cpu-topology.txt (or the 
schema version of it in dtschema)

Rob
Sudeep Holla Feb. 15, 2024, 11:26 a.m. UTC | #6
On Fri, Feb 02, 2024 at 09:53:52AM -0600, Rob Herring wrote:
> On Wed, Jan 31, 2024 at 10:23:03AM -0800, Saravana Kannan wrote:
> >
> > We also need the OPP tables to indicate which CPUs are part of the
> > same cluster, etc. Don't want to invent a new "protocol" and just use
> > existing DT bindings.
> 
> Topology binding is for that.
> 
> What about when x86 and other ACPI systems need to do this too? You 
> define a discoverable interface, then it works regardless of firmware. 
> KVM, Virtio, VFIO, etc. are all their own protocols.
>

+1 for the above. I have mentioned the same a couple of times, but I am
told it can be taken up later, which I fail to understand. Once we define
DT bindings, they must be supported for a long time, which leaves little
motivation to move to a discoverable interface that works on any virtual
platform irrespective of the firmware.

--
Regards,
Sudeep
Quentin Perret March 11, 2024, 11:40 a.m. UTC | #7
On Sunday 04 Feb 2024 at 10:23:00 (+0000), Marc Zyngier wrote:
> Well, I've said it before, and I'll say it again: the use of
> *frequencies* makes no sense. It is a lie (it doesn't describe any
> hardware, physical nor virtual), and doesn't reflect the way the
> emulated cpufreq controller behaves either (since it scales everything
> back to what the host can potentially do)
> 
> The closest abstraction we have to this is the unit-less capacity. And
> *that* reflects the way the emulated cpufreq controller works while
> avoiding lying to the guest about some arbitrary frequency.
> 
> In practice, this changes nothing to either the code or the behaviour.
> But it changes the binding.

Apologies all for jumping late into this, but for what it's worth,
regardless of the unit of the binding, Linux will shove that into
cpufreq's 'frequency table' anyway, which as the name suggests is very
much assuming frequencies :/ -- see how struct cpufreq_frequency_table
explicitly requires kHz. The worst part is that this even ends up
being reported to _userspace_ as frequencies in sysfs via cpufreq's
scaling_available_frequencies file, even when they're really not...

In the case of SCMI, for example, IIRC the firmware can optionally (and
in practice I think it does, at least for all older implementations of
the spec) report unit-less operating points to the driver, which will then
happily pretend these are KHz values when reporting that into PM_OPP and
cpufreq -- see how scmi_dvfs_device_opps_add() simply multiplies the
level's 'perf' member by 1000 when populating PM_OPP (which is then
propagated to cpufreq's freq_table'). And a small extract from the SCMI
spec:

    "Certain platforms use IMPLEMENTATION DEFINED indices to identify
     performance levels. Level Indexing Mode is used to describe such
     platform behavior. The level indices associated with performance
     levels are neither guaranteed to be contiguous nor required to be
     on a linear scale."

Not nice, but unfortunately the core cpufreq framework has far too many
historical dependencies on things being frequencies to really change it
now, so we're pretty much stuck with that :(
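A condensed sketch of that pattern, with illustrative names (not the actual SCMI driver code): a unit-less perf level gets multiplied into a fake kHz value so PM_OPP and cpufreq can treat it as a frequency:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for an opaque firmware performance level. */
struct fake_opp {
	uint32_t level;   /* unit-less perf level reported by firmware */
	uint64_t freq_hz; /* what PM_OPP/cpufreq will be told */
};

/*
 * Pretend the level is a kHz value: level * 1000 -> Hz. This mirrors,
 * in simplified form, the "multiply by 1000" trick described above.
 */
static void level_to_fake_freq(struct fake_opp *opp)
{
	opp->freq_hz = (uint64_t)opp->level * 1000u;
}
```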

So, while I do agree with the sentiment that this is a non-ideal place
to be, 'faking' frequencies is how we've addressed this so far in Linux,
so I'm personally not too fussed about David's usage of a freq-based DT
binding in this particular instance. On the plus side, that allows all of
the PM_OPP and cpufreq infrastructure to be re-used as-is, so that's cool.

I guess we could make the argument that Linux's approach to handling
frequencies shouldn't influence this given that the binding should be OS
agnostic, but I can easily see how another OS could still make use of
that binding (and in fact requiring that this other OS can deal with
unitless frequencies is most likely going to be a bigger problem), so
I'd be inclined to think this isn't a major problem either.

Thanks,
Quentin
David Dai May 2, 2024, 8:17 p.m. UTC | #8
On Thu, Feb 15, 2024 at 3:26 AM Sudeep Holla <sudeep.holla@arm.com> wrote:
>
> On Fri, Feb 02, 2024 at 09:53:52AM -0600, Rob Herring wrote:
> > > > [...]
> >
> > Topology binding is for that.
> >
> > What about when x86 and other ACPI systems need to do this too? You
> > define a discoverable interface, then it works regardless of firmware.
> > KVM, Virtio, VFIO, etc. are all their own protocols.
> >
>
> +1 for the above. I have mentioned the same couple of times but I am told
> it can be taken up later which I fail to understand. Once we define DT
> bindings, it must be supported for long time which doesn't provide any
> motivation to such a discoverable interface which works on any virtual
> platforms irrespective of the firmware.
>

Hi Sudeep,

We are thinking of a discoverable interface like this, where the
performance info and performance domain mappings are discoverable
through the device registers. This should make it more portable across
firmwares. Would this address your concerns? Also, you asked to
document this. Where exactly would you want to document this? AFAIK
the DT bindings documentation is not supposed to include this level of
detail. Would a comment in the driver be sufficient?

CPU0..CPUn
+-------------+-------------------------------+--------+-------+
| Register    | Description                   | Offset |   Len |
+-------------+-------------------------------+--------+-------+
| cur_perf    | read this register to get     |    0x0 |   0x4 |
|             | the current perf (integer val |        |       |
|             | representing perf relative to |        |       |
|             | max performance)              |        |       |
|             | that vCPU is running at       |        |       |
+-------------+-------------------------------+--------+-------+
| set_perf    | write to this register to set |    0x4 |   0x4 |
|             | perf value of the vCPU        |        |       |
+-------------+-------------------------------+--------+-------+
| perftbl_len | number of entries in perf     |    0x8 |   0x4 |
|             | table. A single entry in the  |        |       |
|             | perf table denotes no table   |        |       |
|             | and the entry contains        |        |       |
|             | the maximum perf value        |        |       |
|             | that this vCPU supports.      |        |       |
|             | The guest can request any     |        |       |
|             | value between 1 and max perf. |        |       |
+-------------+-------------------------------+--------+-------+
| perftbl_sel | write to this register to     |    0xc |   0x4 |
|             | select perf table entry to    |        |       |
|             | read from                     |        |       |
+-------------+-------------------------------+--------+-------+
| perftbl_rd  | read this register to get     |   0x10 |   0x4 |
|             | perf value of the selected    |        |       |
|             | entry based on perftbl_sel    |        |       |
+-------------+-------------------------------+--------+-------+
| perf_domain | performance domain number     |   0x14 |   0x4 |
|             | that this vCPU belongs to.    |        |       |
|             | vCPUs sharing the same perf   |        |       |
|             | domain number are part of the |        |       |
|             | same performance domain.      |        |       |
+-------------+-------------------------------+--------+-------+
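To make the perftbl_sel/perftbl_rd indirection concrete, a guest-side sketch follows. All names are illustrative, the fake table contents are made up, and the in-memory "device" stands in for real MMIO accessors such as readl()/writel() on an ioremapped region:

```c
#include <assert.h>
#include <stdint.h>

/* Register offsets from the proposed per-CPU layout (each 4 bytes). */
enum {
	REG_CUR_PERF    = 0x00,
	REG_SET_PERF    = 0x04,
	REG_PERFTBL_LEN = 0x08,
	REG_PERFTBL_SEL = 0x0c,
	REG_PERFTBL_RD  = 0x10,
	REG_PERF_DOMAIN = 0x14,
};

/* Fake per-vCPU register file standing in for the MMIO region. */
struct vcpu_regs { uint32_t r[6]; };

static uint32_t reg_read(struct vcpu_regs *base, uint32_t off)
{
	return base->r[off / 4];
}

static void reg_write(struct vcpu_regs *base, uint32_t off, uint32_t val)
{
	base->r[off / 4] = val;
	/*
	 * Emulate the device: selecting an entry latches its perf value
	 * into perftbl_rd. The (idx + 1) * 64 table is invented here.
	 */
	if (off == REG_PERFTBL_SEL)
		base->r[REG_PERFTBL_RD / 4] = (val + 1) * 64;
}

/* Read one entry of the perf table via the sel/rd register pair. */
static uint32_t read_perftbl_entry(struct vcpu_regs *base, uint32_t idx)
{
	reg_write(base, REG_PERFTBL_SEL, idx);
	return reg_read(base, REG_PERFTBL_RD);
}
```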

Thanks,
David


> --
> Regards,
> Sudeep
Sudeep Holla May 7, 2024, 10:21 a.m. UTC | #9
On Thu, May 02, 2024 at 01:17:57PM -0700, David Dai wrote:
> On Thu, Feb 15, 2024 at 3:26 AM Sudeep Holla <sudeep.holla@arm.com> wrote:
> >
> > On Fri, Feb 02, 2024 at 09:53:52AM -0600, Rob Herring wrote:
> > > > [...]
> > >
> > > Topology binding is for that.
> > >
> > > What about when x86 and other ACPI systems need to do this too? You
> > > define a discoverable interface, then it works regardless of firmware.
> > > KVM, Virtio, VFIO, etc. are all their own protocols.
> > >
> >
> > +1 for the above. I have mentioned the same couple of times but I am told
> > it can be taken up later which I fail to understand. Once we define DT
> > bindings, it must be supported for long time which doesn't provide any
> > motivation to such a discoverable interface which works on any virtual
> > platforms irrespective of the firmware.
> >
> 
> Hi Sudeep,
> 
> We are thinking of a discoverable interface like this, where the
> performance info and performance domain mappings are discoverable
> through the device registers. This should make it more portable across
> firmwares. Would this address your concerns?

Yes.

> Also, you asked to  document this.
> Where exactly would you want to document this?

IMO it could go under Documentation/firmware-guide ? Unless someone
has any other suggestions.

> AFAIK the DT bindings documentation is not supposed to include this level of
> detail. Would a comment in the driver be sufficient?

Agreed, DT bindings are not the right place. Maybe even a comment in the
driver would be sufficient.

Overall it looks good and on the right path IMO.

> 
> CPU0..CPUn
> +-------------+-------------------------------+--------+-------+
> | Register    | Description                   | Offset |   Len |
> +-------------+-------------------------------+--------+-------+
> | cur_perf    | read this register to get     |    0x0 |   0x4 |
> |             | the current perf (integer val |        |       |
> |             | representing perf relative to |        |       |
> |             | max performance)              |        |       |
> |             | that vCPU is running at       |        |       |
> +-------------+-------------------------------+--------+-------+
> | set_perf    | write to this register to set |    0x4 |   0x4 |
> |             | perf value of the vCPU        |        |       |
> +-------------+-------------------------------+--------+-------+
> | perftbl_len | number of entries in perf     |    0x8 |   0x4 |
> |             | table. A single entry in the  |        |       |
> |             | perf table denotes no table   |        |       |
> |             | and the entry contains        |        |       |
> |             | the maximum perf value        |        |       |
> |             | that this vCPU supports.      |        |       |
> |             | The guest can request any     |        |       |
> |             | value between 1 and max perf. |        |       |

Does this have to be per-CPU? It could be simplified by keeping just
cur_perf, set_perf and perf_domain as per-CPU entries and keeping the
per-domain entries separate. But I am not against per-CPU entries
either.

Also, why do you need the table if the guest can request any value from
1 to max perf? Will the table have discrete OPPs? If so, how do they
map to the perf range [1 - maxperf]?

> +-------------+-------------------------------+--------+-------+
> | perftbl_sel | write to this register to     |    0xc |   0x4 |
> |             | select perf table entry to    |        |       |
> |             | read from                     |        |       |
> +-------------+-------------------------------+--------+-------+
> | perftbl_rd  | read this register to get     |   0x10 |   0x4 |
> |             | perf value of the selected    |        |       |
> |             | entry based on perftbl_sel    |        |       |
> +-------------+-------------------------------+--------+-------+
> | perf_domain | performance domain number     |   0x14 |   0x4 |
> |             | that this vCPU belongs to.    |        |       |
> |             | vCPUs sharing the same perf   |        |       |
> |             | domain number are part of the |        |       |
> |             | same performance domain.      |        |       |
> +-------------+-------------------------------+--------+-------+

The above are a couple of high-level questions I have ATM.

--
Regards,
Sudeep
David Dai May 17, 2024, 8:59 p.m. UTC | #10
On Tue, May 7, 2024 at 3:21 AM Sudeep Holla <sudeep.holla@arm.com> wrote:
>
> On Thu, May 02, 2024 at 01:17:57PM -0700, David Dai wrote:
> > On Thu, Feb 15, 2024 at 3:26 AM Sudeep Holla <sudeep.holla@arm.com> wrote:
> > >
> > > On Fri, Feb 02, 2024 at 09:53:52AM -0600, Rob Herring wrote:
> > > > On Wed, Jan 31, 2024 at 10:23:03AM -0800, Saravana Kannan wrote:
> > > > >
> > > > > We also need the OPP tables to indicate which CPUs are part of the
> > > > > same cluster, etc. Don't want to invent a new "protocol" and just use
> > > > > existing DT bindings.
> > > >
> > > > Topology binding is for that.
> > > >
> > > > What about when x86 and other ACPI systems need to do this too? You
> > > > define a discoverable interface, then it works regardless of firmware.
> > > > KVM, Virtio, VFIO, etc. are all their own protocols.
> > > >
> > >
> > > +1 for the above. I have mentioned the same a couple of times, but I am
> > > told it can be taken up later, which I fail to understand. Once we define
> > > DT bindings, they must be supported for a long time, which leaves no
> > > motivation for such a discoverable interface that works on any virtual
> > > platform irrespective of the firmware.
> > >
> >
> > Hi Sudeep,
> >
> > We are thinking of a discoverable interface like this, where the
> > performance info and performance domain mappings are discoverable
> > through the device registers. This should make it more portable across
> > firmwares. Would this address your concerns?
>
> Yes.
>
> > Also, you asked to document this.
> > Where exactly would you want to document this?
>
> IMO it could go under Documentation/firmware-guide? Unless someone
> has any other suggestions.
>
> > AFAIK the DT bindings documentation is not supposed to include this level of
> > detail. Would a comment in the driver be sufficient?
>
> Agree, DT bindings are not the right place. Maybe even a comment in the
> driver would be sufficient.

Alright, I’ll make this into a comment in the driver itself.

>
> Overall it looks good and on the right path IMO.
>

Okay, I’ll submit V6 patches and continue from there.

> >
> > CPU0..CPUn
> > +-------------+-------------------------------+--------+-------+
> > | Register    | Description                   | Offset |   Len |
> > +-------------+-------------------------------+--------+-------+
> > | cur_perf    | read this register to get     |    0x0 |   0x4 |
> > |             | the current perf (integer val |        |       |
> > |             | representing perf relative to |        |       |
> > |             | max performance)              |        |       |
> > |             | that vCPU is running at       |        |       |
> > +-------------+-------------------------------+--------+-------+
> > | set_perf    | write to this register to set |    0x4 |   0x4 |
> > |             | perf value of the vCPU        |        |       |
> > +-------------+-------------------------------+--------+-------+
> > | perftbl_len | number of entries in perf     |    0x8 |   0x4 |
> > |             | table. A single entry in the  |        |       |
> > |             | perf table denotes no table   |        |       |
> > |             | and the entry contains        |        |       |
> > |             | the maximum perf value        |        |       |
> > |             | that this vCPU supports.      |        |       |
> > |             | The guest can request any     |        |       |
> > |             | value between 1 and max perf. |        |       |
>
> Does this have to be per-CPU? It could be simplified by keeping just
> cur_perf, set_perf and perf_domain as per-CPU entries and keeping the
> per-domain entries separate. But I am not against per-CPU entries
> either.

I think separating out the perf domain entries may make the device
emulation and the driver slightly more complicated. Emulating the perf
domain regions per CPU is a simpler layout if we need to install eBPF
programs to handle the backend per vCPU. Each vCPU looking up its own
frequency information in its own MMIO region is a bit easier too when
initializing the driver. Also each vCPU will be in its own perf domain
for the majority of the use cases, so it won’t make much of a
difference most of the time.

>
> Also, why do you need the table if the guest can request any value from
> 1 to max perf? Will the table have discrete OPPs? If so, how do they
> map to the perf range [1 - maxperf]?

Let me clarify this in the comment: the perf range [1 - maxperf] is
only applicable when the frequency table is not supported. The cpufreq
driver will still vote for discrete levels if tables are used. The VMM
(Virtual Machine Manager) may choose to use tables depending on the use
case, and the driver will support both cases.

Thanks,
David

>
> > +-------------+-------------------------------+--------+-------+
> > | perftbl_sel | write to this register to     |    0xc |   0x4 |
> > |             | select perf table entry to    |        |       |
> > |             | read from                     |        |       |
> > +-------------+-------------------------------+--------+-------+
> > | perftbl_rd  | read this register to get     |   0x10 |   0x4 |
> > |             | perf value of the selected    |        |       |
> > |             | entry based on perftbl_sel    |        |       |
> > +-------------+-------------------------------+--------+-------+
> > | perf_domain | performance domain number     |   0x14 |   0x4 |
> > |             | that this vCPU belongs to.    |        |       |
> > |             | vCPUs sharing the same perf   |        |       |
> > |             | domain number are part of the |        |       |
> > |             | same performance domain.      |        |       |
> > +-------------+-------------------------------+--------+-------+
>
> The above are a couple of high-level questions I have ATM.
>
> --
> Regards,
> Sudeep
Patch

diff --git a/Documentation/devicetree/bindings/cpufreq/qemu,cpufreq-virtual.yaml b/Documentation/devicetree/bindings/cpufreq/qemu,cpufreq-virtual.yaml
new file mode 100644
index 000000000000..cd617baf75e7
--- /dev/null
+++ b/Documentation/devicetree/bindings/cpufreq/qemu,cpufreq-virtual.yaml
@@ -0,0 +1,110 @@ 
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/cpufreq/qemu,cpufreq-virtual.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Virtual CPUFreq
+
+maintainers:
+  - David Dai <davidai@google.com>
+  - Saravana Kannan <saravanak@google.com>
+
+description:
+  Virtual CPUFreq is a virtualized driver in guest kernels that sends frequency
+  selection of its vCPUs as a hint to the host through MMIO regions. Each vCPU
+  is associated with a frequency domain which can be shared with other vCPUs.
+  Each frequency domain has its own set of registers for frequency controls.
+
+properties:
+  compatible:
+    const: qemu,virtual-cpufreq
+
+  reg:
+    maxItems: 1
+    description:
+      Address and size of the region containing frequency controls for each of
+      the frequency domains. The regions for the frequency domains are placed
+      contiguously and contain registers for controlling DVFS (Dynamic Voltage
+      and Frequency Scaling) characteristics. The size of the region is
+      proportional to the total number of frequency domains. This device also
+      needs the CPUs to list their OPPs using operating-points-v2 tables. The
+      OPP tables for the CPUs should use normalized "frequency" values where
+      the OPP with the highest performance among all the vCPUs is listed as
+      1024 KHz. The rest of the frequencies of all the vCPUs should be
+      normalized based on their performance relative to that 1024 KHz OPP.
+      This makes it much easier to migrate the VM across systems which might
+      have different physical CPU OPPs.
+
+required:
+  - compatible
+  - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    // This example shows a two CPU configuration with a frequency domain
+    // for each CPU showing normalized performance points.
+    cpus {
+      #address-cells = <1>;
+      #size-cells = <0>;
+
+      cpu@0 {
+        compatible = "arm,armv8";
+        device_type = "cpu";
+        reg = <0x0>;
+        operating-points-v2 = <&opp_table0>;
+      };
+
+      cpu@1 {
+        compatible = "arm,armv8";
+        device_type = "cpu";
+        reg = <0x1>;
+        operating-points-v2 = <&opp_table1>;
+      };
+    };
+
+    opp_table0: opp-table-0 {
+      compatible = "operating-points-v2";
+
+      opp64000 { opp-hz = /bits/ 64 <64000>; };
+      opp128000 { opp-hz = /bits/ 64 <128000>; };
+      opp192000 { opp-hz = /bits/ 64 <192000>; };
+      opp256000 { opp-hz = /bits/ 64 <256000>; };
+      opp320000 { opp-hz = /bits/ 64 <320000>; };
+      opp384000 { opp-hz = /bits/ 64 <384000>; };
+      opp425000 { opp-hz = /bits/ 64 <425000>; };
+    };
+
+    opp_table1: opp-table-1 {
+      compatible = "operating-points-v2";
+
+      opp64000 { opp-hz = /bits/ 64 <64000>; };
+      opp128000 { opp-hz = /bits/ 64 <128000>; };
+      opp192000 { opp-hz = /bits/ 64 <192000>; };
+      opp256000 { opp-hz = /bits/ 64 <256000>; };
+      opp320000 { opp-hz = /bits/ 64 <320000>; };
+      opp384000 { opp-hz = /bits/ 64 <384000>; };
+      opp448000 { opp-hz = /bits/ 64 <448000>; };
+      opp512000 { opp-hz = /bits/ 64 <512000>; };
+      opp576000 { opp-hz = /bits/ 64 <576000>; };
+      opp640000 { opp-hz = /bits/ 64 <640000>; };
+      opp704000 { opp-hz = /bits/ 64 <704000>; };
+      opp768000 { opp-hz = /bits/ 64 <768000>; };
+      opp832000 { opp-hz = /bits/ 64 <832000>; };
+      opp896000 { opp-hz = /bits/ 64 <896000>; };
+      opp960000 { opp-hz = /bits/ 64 <960000>; };
+      opp1024000 { opp-hz = /bits/ 64 <1024000>; };
+
+    };
+
+    soc {
+      #address-cells = <1>;
+      #size-cells = <1>;
+
+      cpufreq@1040000 {
+        compatible = "qemu,virtual-cpufreq";
+        reg = <0x1040000 0x10>;
+      };
+    };