Message ID: 1404390158-21542-2-git-send-email-parth.dixit@linaro.org
State: New
On Thu, 2014-07-03 at 17:52 +0530, Parth Dixit wrote:
> static void do_trap_psci(struct cpu_user_regs *regs)
> {
>     arm_psci_fn_t psci_call = NULL;
>
> -    if ( PSCI_OP_REG(regs) >= ARRAY_SIZE(arm_psci_table) )
> +    if ( PSCI_OP_REG(regs) < PSCI_migrate )
>     {
> -        domain_crash_synchronous();
> -        return;
> +        if ( PSCI_OP_REG(regs) >= ARRAY_SIZE(arm_psci_table) )
> +        {
> +            domain_crash_synchronous();
> +            return;
> +        }
> +        psci_call = arm_psci_table[PSCI_OP_REG(regs)].fn;
> +    }
> +    else
> +    {
> +        if ( ( PSCI_OP_REG(regs) & PSCI_FN_MASK ) >= ARRAY_SIZE(arm_psci_0_2_table) )

I think you need to also check that "PSCI_OP_REG(regs) & ~PSCI_FN_MASK" is either the 32-bit or 64-bit base. Otherwise I could call 0xdeadfe01 and it would work.

Do you not also need to check that the guest is of the appropriate type? Is an aarch64 guest allowed to call the aarch32 entry points? The spec doesn't say so explicitly AFAICT.

If it is allowed then I think you need to be truncating the 32-bit arguments to 32 bits when an aarch64 guest calls the 32-bit entry points.

Hrm, checking through the spec I see now that only some of the functions have both a 32- and 64-bit function id. VERSION, CPU_OFF, MIGRATE_INFO_TYPE, SYSTEM_OFF and SYSTEM_RESET only have a single function id (I suppose because they do not take any arguments which are mode specific).

I'm afraid that AFAICT you need to handle this distinction, which I expect makes it rather hard to share the same lookup table between 32- and 64-bit. A switch(PSCI_OP_REG(regs)) is looking increasingly like the correct approach to me.

> +int do_psci_0_2_version(void)
> +{
> +    struct domain *d = current->domain;
> +
> +    return ( d->arch.vpsci_ver = XEN_PSCI_V_0_2 );

Is this assignment really intentional?

I see you use d->arch.vpsci_ver later in the common do_psci_0_2_cpu_on function, so I think you did mean to do this.

Does the spec say somewhere that after calling PSCI_0_2_VERSION an OS is forbidden from calling PSCI v0.1 functions?
Likewise does it mandate that an OS which supports v0.2 always calls VERSION before any other function?

I can't find anything in the spec which says either of those things. Please do point out if I am wrong.

In any case I think this approach to a common function is wrong. I think you should implement something like:

    int do_common_cpu_on(int version, register_t target_cpu, register_t entry_point,
                         register_t context_id)

similar to your current do_psci_0_2_cpu_suspend but using version instead of d->arch.vpsci_ver. Then you have:

    int do_psci_cpu_on(uint32_t vcpuid, register_t entry_point)
    {
        return do_common_cpu_on(PSCI_0_1_VERSION, vcpuid, entry_point, 0);
    }

    int do_psci_0_2_cpu_on(register_t target_cpu, register_t entry_point,
                           register_t context_id)
    {
        return do_common_cpu_on(PSCI_0_2_VERSION, target_cpu,
                                entry_point, context_id);
    }

IOW the mode of operation is determined by the entry point used, not some bit of domain state.

> +int do_psci_0_2_cpu_on(register_t target_cpu, register_t entry_point,
> +                       register_t context_id)
> +{
>     struct vcpu *v;
>     struct domain *d = current->domain;
>     struct vcpu_guest_context *ctxt;
>     int rc;
>     int is_thumb = entry_point & 1;
> +    uint32_t vcpuid ;

Stray " " here.

> +
> +    if( d->arch.vpsci_ver == XEN_PSCI_V_0_2 )
> +        vcpuid = (u32)(target_cpu & MPIDR_HWID_MASK);

Doesn't this mask off AFF3 on 64-bit?
I know we don't use AFF3 today in Xen, but this is setting a pretty subtle trap for the person who does come to implement something which uses it.

> @@ -75,12 +144,65 @@ int do_psci_cpu_on(uint32_t vcpuid, register_t entry_point)
>     return PSCI_SUCCESS;
> }
>
> -int do_psci_cpu_off(uint32_t power_state)
> +static const unsigned long target_affinity_mask[] = {
> +    ( MPIDR_HWID_MASK & AFFINITY_MASK( 0 ) ),
> +    ( MPIDR_HWID_MASK & AFFINITY_MASK( 1 ) ),
> +    ( MPIDR_HWID_MASK & AFFINITY_MASK( 2 ) )
> +#ifdef CONFIG_ARM_64
> +    ,( MPIDR_HWID_MASK & AFFINITY_MASK( 3 ) )
> +#endif
> +};
> +
> +int do_psci_0_2_affinity_info(register_t target_affinity,
> +                              uint32_t lowest_affinity_level)
> {
> -    struct vcpu *v = current;
> -    if ( !test_and_set_bit(_VPF_down, &v->pause_flags) )
> -        vcpu_sleep_nosync(v);
> -    return PSCI_SUCCESS;
> +    struct domain *d = current->domain;
> +    struct vcpu *v;
> +    uint32_t vcpuid;
> +
> +    if ( lowest_affinity_level < ARRAY_SIZE(target_affinity_mask) )
> +        target_affinity &= target_affinity_mask[lowest_affinity_level];
> +    else
> +        return PSCI_INVALID_PARAMETERS;
> +
> +    for ( vcpuid = 0; vcpuid < d->max_vcpus; vcpuid++ )
> +    {
> +        v = d->vcpu[vcpuid];
> +
> +        if ( ( ( v->arch.vmpidr & target_affinity_mask[lowest_affinity_level] )

If you assign target_affinity_mask[lowest_affinity_level] to a local variable you can avoid the awful state of line wrapping here.

Ian.
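Ian's local-variable suggestion can be sketched outside Xen with mocked state. Everything below is hypothetical scaffolding for illustration: struct mock_vcpu, its down flag, and the helper name affinity_info stand in for Xen's d->vcpu[], _VPF_down and do_psci_0_2_affinity_info; only the mask table mirrors the patch.

```c
#include <stdint.h>
#include <stddef.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
#define MPIDR_LEVEL_BITS 8
#define AFFINITY_MASK(level) (~((1ul << ((level) * MPIDR_LEVEL_BITS)) - 1))
#define MPIDR_HWID_MASK 0xfffffful

#define PSCI_INVALID_PARAMETERS     (-2)
#define PSCI_0_2_AFFINITY_LEVEL_ON    0
#define PSCI_0_2_AFFINITY_LEVEL_OFF   1

/* Mock per-vcpu state standing in for d->vcpu[], vmpidr and _VPF_down. */
struct mock_vcpu { unsigned long vmpidr; int down; };

static const unsigned long target_affinity_mask[] = {
    MPIDR_HWID_MASK & AFFINITY_MASK(0),
    MPIDR_HWID_MASK & AFFINITY_MASK(1),
    MPIDR_HWID_MASK & AFFINITY_MASK(2),
};

static int affinity_info(const struct mock_vcpu *vcpus, size_t nr,
                         unsigned long target_affinity,
                         uint32_t lowest_affinity_level)
{
    unsigned long mask;
    size_t i;

    if ( lowest_affinity_level >= ARRAY_SIZE(target_affinity_mask) )
        return PSCI_INVALID_PARAMETERS;

    /* One local variable avoids the long wrapped comparison in the patch. */
    mask = target_affinity_mask[lowest_affinity_level];
    target_affinity &= mask;

    for ( i = 0; i < nr; i++ )
        if ( (vcpus[i].vmpidr & mask) == target_affinity && !vcpus[i].down )
            return PSCI_0_2_AFFINITY_LEVEL_ON;

    return PSCI_0_2_AFFINITY_LEVEL_OFF;
}
```

With mask hoisted out of the loop, the comparison fits on one line and the compiler no longer reloads the table element each iteration.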
On Wed, 2014-07-09 at 14:24 +0100, Ian Campbell wrote:
> If it is allowed then I think you need to be truncating the 32-bit
> arguments to 32-bits when an aarch64 guest calls the 32-bit entry
> points.

Looking at the SMC calling convention (ARM DEN 0028A) I think this is indeed the case, since it specifies the use of w0..w7 for arguments when an aarch64 cpu calls an SMC32 interface. It does say the upper bits are zero, but I think you still need to check.

Ian.
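The truncation Ian describes could be sketched as below. psci_arg is a hypothetical helper, not Xen code; the only fact assumed from ARM DEN 0028A is that bit 30 of the function id distinguishes SMC64 (set) from SMC32 (clear), so for an SMC32 id only the low 32 bits of each x register should be trusted.

```c
#include <stdint.h>

#define SMC64_BIT 0x40000000u /* bit 30 of the SMC function id */

/* For an SMC32 function id, arguments travel in w0..w7, so drop the
 * upper word of the x register before using it; SMC64 ids keep all 64
 * bits. (Hypothetical helper for illustration.) */
static uint64_t psci_arg(uint32_t fn_id, uint64_t xreg)
{
    int is_smc64 = (fn_id & SMC64_BIT) != 0;

    return is_smc64 ? xreg : (uint32_t)xreg;
}
```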
Hi,

On 9 July 2014 18:54, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> I'm afraid that AFAICT you need to handle this distinction, which I
> expect makes it rather hard to share the same lookup table between 32-
> and 64-bit. A switch(PSCI_OP_REG(regs)) is looking increasingly like the
> correct approach to me.

I am a bit confused: are you saying that e.g. aarch64 can call the "psci_0_2_suspend" function with the aarch32 function id (effectively the 32-bit version of the function)?

> In any case I think this approach to a common function is wrong. [...]
> IOW the mode of operation is determined by the entry point used, not some
> bit of domain state.

Sure, I'll switch to the entry-point-based approach.

> > +    uint32_t vcpuid ;
>
> Stray " " here.

Will remove it in the next patchset.

> > +    if( d->arch.vpsci_ver == XEN_PSCI_V_0_2 )
> > +        vcpuid = (u32)(target_cpu & MPIDR_HWID_MASK);
>
> Doesn't this mask off AFF3 on 64-bit? I know we don't use AFF3 today in
> Xen, but this is setting a pretty subtle trap for the person who does
> come to implement something which uses it.

Sure, will take care of it.

> If you assign target_affinity_mask[lowest_affinity_level] to a local
> variable you can avoid the awful state of line wrapping here.

Will introduce a local variable.

> Ian.
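The entry-point-based pattern being agreed on here can be sketched as a compilable mock. The bodies are hypothetical stand-ins (no real domain state); last_vcpuid exists only so the vcpu-id derivation is observable, and register_t is approximated by uintptr_t.

```c
#include <stdint.h>

typedef uintptr_t register_t; /* stand-in for Xen's register_t */

#define PSCI_0_1_VERSION 1
#define PSCI_0_2_VERSION 2
#define MPIDR_HWID_MASK  0xffffffu
#define PSCI_SUCCESS     0

/* Records how the vcpu id was derived, so it can be observed in tests. */
static uint32_t last_vcpuid;

/* The version parameter, not domain state, selects the behaviour:
 * v0.2 passes an MPIDR-style target, v0.1 a plain vcpu id. */
static int do_common_cpu_on(int version, register_t target_cpu,
                            register_t entry_point, register_t context_id)
{
    if ( version == PSCI_0_2_VERSION )
        last_vcpuid = (uint32_t)(target_cpu & MPIDR_HWID_MASK);
    else
        last_vcpuid = (uint32_t)target_cpu;
    (void)entry_point; (void)context_id; /* real code would use these */
    return PSCI_SUCCESS;
}

static int do_psci_cpu_on(uint32_t vcpuid, register_t entry_point)
{
    return do_common_cpu_on(PSCI_0_1_VERSION, vcpuid, entry_point, 0);
}

static int do_psci_0_2_cpu_on(register_t target_cpu, register_t entry_point,
                              register_t context_id)
{
    return do_common_cpu_on(PSCI_0_2_VERSION, target_cpu, entry_point,
                            context_id);
}
```

The caller's choice of entry point fully determines the semantics, which is exactly the property Ian asked for.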
On Thu, 2014-07-10 at 15:44 +0530, Parth Dixit wrote:
> I am a bit confused: are you saying that e.g. aarch64 can call the
> "psci_0_2_suspend" function with the aarch32 function id (effectively
> the 32-bit version of the function)?

SUSPEND is not a good example since it has both 32- and 64-bit fn ids.

But e.g. CPU_OFF only defines a single id, 0x84000002. I think calling 0xc4000002 would need to be rejected.

It seems like the SMC Calling Convention doc which I referenced covers the requirements here.

Ian.
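A sketch of the id validation Ian is asking for: reject ids outside both bases, and reject SMC64 encodings of functions that only define an SMC32 id. The helper names are hypothetical; the bases and the per-function SMC64 availability follow the PSCI v0.2 function-id assignments (0x84000000 SMC32, 0xC4000000 SMC64).

```c
#include <stdint.h>

#define PSCI_0_2_FN32_BASE 0x84000000u
#define PSCI_0_2_FN64_BASE 0xC4000000u
#define PSCI_FN_MASK       0x0000000Fu

enum { PSCI_FN_OK = 0, PSCI_FN_BAD = -1 };

/* Of the ten v0.2 functions, only these indices also define an SMC64 id:
 * CPU_SUSPEND(1), CPU_ON(3), AFFINITY_INFO(4), MIGRATE(5),
 * MIGRATE_INFO_UP_CPU(7). VERSION, CPU_OFF, MIGRATE_INFO_TYPE,
 * SYSTEM_OFF and SYSTEM_RESET are SMC32-only. */
static int has_smc64_variant(uint32_t fn_index)
{
    switch ( fn_index )
    {
    case 1: case 3: case 4: case 5: case 7:
        return 1;
    default:
        return 0;
    }
}

static int check_psci_0_2_fn(uint32_t fn_id)
{
    uint32_t base = fn_id & ~PSCI_FN_MASK;
    uint32_t idx  = fn_id & PSCI_FN_MASK;

    /* Reject anything outside both bases (e.g. 0xdeadfe01)... */
    if ( base != PSCI_0_2_FN32_BASE && base != PSCI_0_2_FN64_BASE )
        return PSCI_FN_BAD;
    if ( idx > 9 )
        return PSCI_FN_BAD;
    /* ...and SMC64 encodings of SMC32-only functions, e.g. 0xC4000002. */
    if ( base == PSCI_0_2_FN64_BASE && !has_smc64_variant(idx) )
        return PSCI_FN_BAD;
    return PSCI_FN_OK;
}
```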
Hi,

On 10 July 2014 15:58, Ian Campbell <ian.campbell@citrix.com> wrote:
> But e.g. CPU_OFF only defines a single id, 0x84000002. I think calling
> 0xc4000002 would need to be rejected.
>
> It seems like the SMC Calling Convention doc which I referenced covers
> the requirements here.

Ok, got it, will take care of this in patchset v4.

> Ian.

Thanks,
Parth
On Thu, 2014-07-10 at 16:04 +0530, Parth Dixit wrote:
> > It seems like the SMC Calling Convention doc which I referenced covers
> > the requirements here.

FWIW I asked Charles (the author of the spec) about this; his response is below (with permission). I've not yet looked at your v4 with this in mind.

Ian.

---------

> For functions which have both an SMC32 and SMC64 variant is it
> permissible for a cpu to call the entry point which doesn't correspond
> to the mode it is in? From the SMC calling convention spec I think it
> is illegal for an aarch32 cpu to call an SMC64 interface but permissible
> for an aarch64 cpu to call SMC32, since not all interfaces have an
> SMC64 variant. Is that true even when an SMC64 variant is defined?

This is valid for the SMC calling convention but not in general for PSCI; in fact the spec requires INVALID_PARAMETERS to be returned in the case where the execution state of the caller (AArch32/64) doesn't match the ID.

It's valid for the SMC calling convention as you may have calls which make sense as 32-bit calls in the first place, and for which there is no 64-bit equivalent. There may even be cases where a caller specifically wants to use a 32-bit variant for a given SMC-based API (though I can't think of any off the top of my head).

For the PSCI calls that are affected by this issue it would have been complicated to allow this. It would have been confusing for a 64-bit caller to call, say, SUSPEND as 32 bit, but obviously expect to be resumed as 64.

> What is the expected behaviour if the top bits of xN are non-zero for
> an aarch64 cpu calling an SMC32 interface?

Results are passed in Wx and returned in Wx, so I would expect the implementer is not required to preserve upper words, as the SMC32 calling convention is dictating the use of 32-bit widths in the first place for parameters and returns. When you load into a Wx it will trash the upper word (make it zero), so coming into the function you'd expect there are zeros there anyway.
Returns should be through Wx, so an implementation should not use upper words when returning values, nor should the callee assume anything there.

> Is it permissible to clobber the top bits of the return register in
> this case?

Yup, in theory there should be zeros there.

> Are cpus allowed to mix and match the use of v0.1 and v0.2 interfaces
> (assuming both have been advertised to them)?
>
> Is there any requirement to call e.g. VERSION before using other v0.2
> interfaces?

Well, in an ACPI world we are going to be explicitly linking to PSCI v0.2 and beyond, so there is no need to support both there. In a DT world you do have the complication that somebody could in theory supply the original PSCI; however, in that case the DT should be describing that.

PSCI_VERSION will come more into its own when we rev PSCI itself, as it will always be a mandatory API going forward.
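Charles' first answer, as a minimal check: PSCI requires INVALID_PARAMETERS when the caller's execution state does not match the id width, so the clear-cut case is an AArch32 caller using an SMC64 id. This is a hedged sketch; is_caller_64bit and check_caller_state are hypothetical stand-ins for Xen's is_64bit_domain() and its dispatch logic.

```c
#include <stdint.h>

#define SMC64_BIT 0x40000000u /* bit 30 of the SMC function id */
#define PSCI_INVALID_PARAMETERS (-2)

/* An AArch32 caller must never invoke an SMC64 function id; per the
 * PSCI spec (as quoted above) this yields INVALID_PARAMETERS. */
static int check_caller_state(int is_caller_64bit, uint32_t fn_id)
{
    if ( (fn_id & SMC64_BIT) && !is_caller_64bit )
        return PSCI_INVALID_PARAMETERS;
    return 0;
}
```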
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 4f0f0e2..e8bcd05 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -237,7 +237,7 @@ static int make_psci_node(libxl__gc *gc, void *fdt)
     res = fdt_begin_node(fdt, "psci");
     if (res) return res;
 
-    res = fdt_property_compat(gc, fdt, 1, "arm,psci");
+    res = fdt_property_compat(gc, fdt, 2, "arm,psci-0.2","arm,psci");
     if (res) return res;
 
     res = fdt_property_string(fdt, "method", "hvc");
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index bc5e56d..f19cc1d 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -504,6 +504,8 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags)
     /* Default the virtual ID to match the physical */
     d->arch.vpidr = boot_cpu_data.midr.bits;
 
+    d->arch.vpsci_ver = 0;
+
     clear_page(d->shared_info);
     share_xen_page_with_guest(
         virt_to_page(d->shared_info), d, XENSHARE_writable);
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index c424793..ebd4170 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -388,6 +388,9 @@ static int make_hypervisor_node(struct domain *d,
 static int make_psci_node(void *fdt, const struct dt_device_node *parent)
 {
     int res;
+    const char compat[] =
+        "arm,psci-0.2""\0"
+        "arm,psci";
 
     DPRINT("Create PSCI node\n");
 
@@ -396,7 +399,7 @@ static int make_psci_node(void *fdt, const struct dt_device_node *parent)
     if ( res )
         return res;
 
-    res = fdt_property_string(fdt, "compatible", "arm,psci");
+    res = fdt_property(fdt, "compatible", compat, sizeof(compat));
     if ( res )
         return res;
 
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 8d2e308..b441e51 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1030,7 +1030,7 @@ static arm_hypercall_t arm_hypercall_table[] = {
     HYPERCALL_ARM(vcpu_op, 3),
 };
 
-typedef int (*arm_psci_fn_t)(uint32_t, register_t);
+typedef int (*arm_psci_fn_t)(register_t, register_t, register_t);
 
 typedef struct {
     arm_psci_fn_t fn;
@@ -1043,11 +1043,30 @@ typedef struct {
         .nr_args = _nr_args,                           \
     }
 
+#define PSCI_0_2(_name, _nr_args)                      \
+    {                                                  \
+        .fn = (arm_psci_fn_t) &do_psci_0_2_ ## _name,  \
+        .nr_args = _nr_args,                           \
+    }
+
 static arm_psci_t arm_psci_table[] = {
     PSCI(cpu_off, 1),
     PSCI(cpu_on, 2),
 };
 
+static arm_psci_t arm_psci_0_2_table[] = {
+    PSCI_0_2(version, 0),
+    PSCI_0_2(cpu_suspend, 3),
+    PSCI_0_2(cpu_off, 0),
+    PSCI_0_2(cpu_on, 3),
+    PSCI_0_2(affinity_info, 2),
+    PSCI_0_2(migrate, 1),
+    PSCI_0_2(migrate_info_type, 0),
+    PSCI_0_2(migrate_info_up_cpu, 0),
+    PSCI_0_2(system_off, 0),
+    PSCI_0_2(system_reset, 0),
+};
+
 #ifndef NDEBUG
 static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code)
 {
@@ -1082,24 +1101,36 @@ static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code)
 #ifdef CONFIG_ARM_64
 #define PSCI_OP_REG(r) (r)->x0
 #define PSCI_RESULT_REG(r) (r)->x0
-#define PSCI_ARGS(r) (r)->x1, (r)->x2
+#define PSCI_ARGS(r) (r)->x1, (r)->x2, (r)->x3
 #else
 #define PSCI_OP_REG(r) (r)->r0
 #define PSCI_RESULT_REG(r) (r)->r0
-#define PSCI_ARGS(r) (r)->r1, (r)->r2
+#define PSCI_ARGS(r) (r)->r1, (r)->r2, (r)->r3
 #endif
 
 static void do_trap_psci(struct cpu_user_regs *regs)
 {
     arm_psci_fn_t psci_call = NULL;
 
-    if ( PSCI_OP_REG(regs) >= ARRAY_SIZE(arm_psci_table) )
+    if ( PSCI_OP_REG(regs) < PSCI_migrate )
     {
-        domain_crash_synchronous();
-        return;
+        if ( PSCI_OP_REG(regs) >= ARRAY_SIZE(arm_psci_table) )
+        {
+            domain_crash_synchronous();
+            return;
+        }
+        psci_call = arm_psci_table[PSCI_OP_REG(regs)].fn;
+    }
+    else
+    {
+        if ( ( PSCI_OP_REG(regs) & PSCI_FN_MASK ) >= ARRAY_SIZE(arm_psci_0_2_table) )
+        {
+            domain_crash_synchronous();
+            return;
+        }
+        psci_call = arm_psci_0_2_table[ ( PSCI_OP_REG(regs) & PSCI_FN_MASK ) ].fn;
     }
-    psci_call = arm_psci_table[PSCI_OP_REG(regs)].fn;
 
     if ( psci_call == NULL )
     {
         domain_crash_synchronous();
diff --git a/xen/arch/arm/vpsci.c b/xen/arch/arm/vpsci.c
index 1ceb8cb..859bb7e 100644
--- a/xen/arch/arm/vpsci.c
+++ b/xen/arch/arm/vpsci.c
@@ -17,24 +17,85 @@
 #include <asm/current.h>
 #include <asm/gic.h>
 #include <asm/psci.h>
+#include <public/sched.h>
+#include <asm-arm/event.h>
 
 int do_psci_cpu_on(uint32_t vcpuid, register_t entry_point)
 {
+    return do_psci_0_2_cpu_on(vcpuid,entry_point,0);
+}
+
+int do_psci_cpu_off(uint32_t power_state)
+{
+    struct vcpu *v = current;
+    if ( !test_and_set_bit(_VPF_down, &v->pause_flags) )
+        vcpu_sleep_nosync(v);
+    return PSCI_SUCCESS;
+}
+
+int do_psci_0_2_version(void)
+{
+    struct domain *d = current->domain;
+
+    return ( d->arch.vpsci_ver = XEN_PSCI_V_0_2 );
+}
+
+int do_psci_0_2_cpu_suspend(uint32_t power_state, register_t entry_point,
+                            register_t context_id)
+{
+    struct vcpu *v = current;
+    struct domain *d = v->domain;
+    struct cpu_user_regs *regs = &v->arch.cpu_info->guest_cpu_user_regs;
+
+    if ( is_32bit_domain(d) )
+    {
+        regs->pc32 = entry_point;
+        regs->r0 = context_id;
+    }
+#ifdef CONFIG_ARM_64
+    else
+    {
+        regs->pc = entry_point;
+        regs->x0 = context_id;
+    }
+#endif
+    vcpu_block_event(v);
+    return PSCI_SUCCESS;
+}
+
+int do_psci_0_2_cpu_off(void)
+{
+    return do_psci_cpu_off(0);
+}
+
+int do_psci_0_2_cpu_on(register_t target_cpu, register_t entry_point,
+                       register_t context_id)
+{
     struct vcpu *v;
     struct domain *d = current->domain;
     struct vcpu_guest_context *ctxt;
     int rc;
     int is_thumb = entry_point & 1;
+    uint32_t vcpuid ;
+
+    if( d->arch.vpsci_ver == XEN_PSCI_V_0_2 )
+        vcpuid = (u32)(target_cpu & MPIDR_HWID_MASK);
+    else
+        vcpuid = target_cpu;
 
     if ( (vcpuid < 0) || (vcpuid >= MAX_VIRT_CPUS) )
-        return PSCI_EINVAL;
+        return PSCI_INVALID_PARAMETERS;
 
     if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
-        return PSCI_EINVAL;
+        return PSCI_INVALID_PARAMETERS;
 
     /* THUMB set is not allowed with 64-bit domain */
     if ( is_64bit_domain(d) && is_thumb )
-        return PSCI_EINVAL;
+        return PSCI_INVALID_PARAMETERS;
+
+    if( ( d->arch.vpsci_ver == XEN_PSCI_V_0_2 ) &&
+        ( !test_bit(_VPF_down, &v->pause_flags) ) )
+        return PSCI_ALREADY_ON;
 
     if ( (ctxt = alloc_vcpu_guest_context()) == NULL )
         return PSCI_DENIED;
 
@@ -48,10 +109,18 @@ int do_psci_cpu_on(uint32_t vcpuid, register_t entry_point)
     ctxt->ttbr1 = 0;
     ctxt->ttbcr = 0; /* Defined Reset Value */
     if ( is_32bit_domain(d) )
+    {
         ctxt->user_regs.cpsr = PSR_GUEST32_INIT;
+        if( d->arch.vpsci_ver == XEN_PSCI_V_0_2 )
+            ctxt->user_regs.r0_usr = context_id;
+    }
 #ifdef CONFIG_ARM_64
     else
+    {
         ctxt->user_regs.cpsr = PSR_GUEST64_INIT;
+        if( d->arch.vpsci_ver == XEN_PSCI_V_0_2 )
+            ctxt->user_regs.x0 = context_id;
+    }
 #endif
 
     /* Start the VCPU with THUMB set if it's requested by the kernel */
@@ -75,12 +144,65 @@ int do_psci_cpu_on(uint32_t vcpuid, register_t entry_point)
     return PSCI_SUCCESS;
 }
 
-int do_psci_cpu_off(uint32_t power_state)
+static const unsigned long target_affinity_mask[] = {
+    ( MPIDR_HWID_MASK & AFFINITY_MASK( 0 ) ),
+    ( MPIDR_HWID_MASK & AFFINITY_MASK( 1 ) ),
+    ( MPIDR_HWID_MASK & AFFINITY_MASK( 2 ) )
+#ifdef CONFIG_ARM_64
+    ,( MPIDR_HWID_MASK & AFFINITY_MASK( 3 ) )
+#endif
+};
+
+int do_psci_0_2_affinity_info(register_t target_affinity,
+                              uint32_t lowest_affinity_level)
 {
-    struct vcpu *v = current;
-    if ( !test_and_set_bit(_VPF_down, &v->pause_flags) )
-        vcpu_sleep_nosync(v);
-    return PSCI_SUCCESS;
+    struct domain *d = current->domain;
+    struct vcpu *v;
+    uint32_t vcpuid;
+
+    if ( lowest_affinity_level < ARRAY_SIZE(target_affinity_mask) )
+        target_affinity &= target_affinity_mask[lowest_affinity_level];
+    else
+        return PSCI_INVALID_PARAMETERS;
+
+    for ( vcpuid = 0; vcpuid < d->max_vcpus; vcpuid++ )
+    {
+        v = d->vcpu[vcpuid];
+
+        if ( ( ( v->arch.vmpidr & target_affinity_mask[lowest_affinity_level] )
+               == target_affinity )
+             && ( !test_bit(_VPF_down, &v->pause_flags) ) )
+            return PSCI_0_2_AFFINITY_LEVEL_ON;
+    }
+
+    return PSCI_0_2_AFFINITY_LEVEL_OFF;
+}
+
+int do_psci_0_2_migrate(uint32_t target_cpu)
+{
+    return PSCI_NOT_SUPPORTED;
+}
+
+int do_psci_0_2_migrate_info_type(void)
+{
+    return PSCI_0_2_TOS_MP_OR_NOT_PRESENT;
+}
+
+register_t do_psci_0_2_migrate_info_up_cpu(void)
+{
+    return PSCI_NOT_SUPPORTED;
+}
+
+void do_psci_0_2_system_off( void )
+{
+    struct domain *d = current->domain;
+    domain_shutdown(d,SHUTDOWN_poweroff);
+}
+
+void do_psci_0_2_system_reset(void)
+{
+    struct domain *d = current->domain;
+    domain_shutdown(d,SHUTDOWN_reboot);
 }
 
 /*
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index b296923..53dbe27 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -124,6 +124,9 @@ struct arch_domain
     /* Virtual CPUID */
     uint32_t vpidr;
 
+    /* PSCI version */
+    uint32_t vpsci_ver;
+
     struct {
         uint64_t offset;
     } phys_timer_base;
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 9267c1b..8a237c4 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -16,6 +16,9 @@
 #define MPIDR_AFF0_MASK    (_AC(0xff,U) << MPIDR_AFF0_SHIFT)
 #define MPIDR_HWID_MASK    _AC(0xffffff,U)
 #define MPIDR_INVALID      (~MPIDR_HWID_MASK)
+#define MPIDR_LEVEL_BITS   (8)
+#define AFFINITY_MASK(level) ~((_AC(0x1,U) << ((level) * MPIDR_LEVEL_BITS)) - 1)
+
 
 /* TTBCR Translation Table Base Control Register */
 #define TTBCR_EAE    _AC(0x80000000,U)
diff --git a/xen/include/asm-arm/psci.h b/xen/include/asm-arm/psci.h
index 189964b..e8d870f 100644
--- a/xen/include/asm-arm/psci.h
+++ b/xen/include/asm-arm/psci.h
@@ -1,10 +1,16 @@
 #ifndef __ASM_PSCI_H__
 #define __ASM_PSCI_H__
 
-#define PSCI_SUCCESS  0
-#define PSCI_ENOSYS  -1
-#define PSCI_EINVAL  -2
-#define PSCI_DENIED  -3
+/* PSCI return values (inclusive of all PSCI versions) */
+#define PSCI_SUCCESS             0
+#define PSCI_NOT_SUPPORTED      -1
+#define PSCI_INVALID_PARAMETERS -2
+#define PSCI_DENIED             -3
+#define PSCI_ALREADY_ON         -4
+#define PSCI_ON_PENDING         -5
+#define PSCI_INTERNAL_FAILURE   -6
+#define PSCI_NOT_PRESENT        -7
+#define PSCI_DISABLED           -8
 
 /* availability of PSCI on the host for SMP bringup */
 extern bool_t psci_available;
@@ -18,6 +24,37 @@ int do_psci_cpu_off(uint32_t power_state);
 int do_psci_cpu_suspend(uint32_t power_state, register_t entry_point);
 int do_psci_migrate(uint32_t vcpuid);
 
+/* PSCI 0.2 functions to handle guest PSCI requests */
+int do_psci_0_2_version(void);
+int do_psci_0_2_cpu_suspend(uint32_t power_state, register_t entry_point,
+                            register_t context_id);
+int do_psci_0_2_cpu_off(void);
+int do_psci_0_2_cpu_on(register_t target_cpu, register_t entry_point,
+                       register_t context_id);
+int do_psci_0_2_affinity_info(register_t target_affinity,
+                              uint32_t lowest_affinity_level);
+int do_psci_0_2_migrate(uint32_t target_cpu);
+int do_psci_0_2_migrate_info_type(void);
+register_t do_psci_0_2_migrate_info_up_cpu(void);
+void do_psci_0_2_system_off(void);
+void do_psci_0_2_system_reset(void);
+
+/* PSCI version */
+#define XEN_PSCI_V_0_2 2
+
+/* PSCI v0.2 xen mapping mask */
+#define PSCI_FN_MASK 0x0000000F
+
+/* PSCI v0.2 affinity level state returned by AFFINITY_INFO */
+#define PSCI_0_2_AFFINITY_LEVEL_ON          0
+#define PSCI_0_2_AFFINITY_LEVEL_OFF         1
+#define PSCI_0_2_AFFINITY_LEVEL_ON_PENDING  2
+
+/* PSCI v0.2 multicore support in Trusted OS returned by MIGRATE_INFO_TYPE */
+#define PSCI_0_2_TOS_UP_MIGRATE_CAPABLE      0
+#define PSCI_0_2_TOS_UP_NOT_MIGRATE_CAPABLE  1
+#define PSCI_0_2_TOS_MP_OR_NOT_PRESENT       2
+
 #endif /* __ASM_PSCI_H__ */
 
 /*
ARM-based virtual machines (dom0 and guests) request power-related functionality from Xen through the PSCI interface. This patch implements version 0.2 of the PSCI standard specified by ARM, for 64-bit and 32-bit ARM machines.

- modified arm_psci_fn_t to take three arguments
- implemented psci_cpu_on with additional error conditions
- removed switch-case in do_trap_psci function
- added PSCI v0.2 macros in psci.h
- added separate tables for PSCI v0.1 and v0.2
- implemented affinity_info

Signed-off-by: Parth Dixit <parth.dixit@linaro.org>
---
Changelog v3
- moved wfi helper to separate patch
- replaced new wfi function in traps.c
- removed defining power_state variable, using the value directly
- added new line between declarations
- merged PSCI v0.1 and v0.2 cpu_on functions to avoid duplication
- removed spurious change
- renamed PSCI return values to reflect v0.2 and moved them to the top
- removed PSCI v0.1 version declaration for now; will introduce it when needed
- removed hard tabs in the code lifted from linux
- removed PSCI_0_1_MAX
- did not retain the copyright header from linux as most of the functions are removed
- added two function tables, for PSCI v0.1 and PSCI v0.2
- added compatibility string to libxl_arm to expose new functionality
- refactored affinity_info code
- added table for masking function
- removed definitions of unused PSCI v0.2 functions
- removed function ids of unused PSCI v0.2 functions
- renamed constant PSCI_0_2_TOS_MP (multicore aware) as per spec

 tools/libxl/libxl_arm.c         |   2 +-
 xen/arch/arm/domain.c           |   2 +
 xen/arch/arm/domain_build.c     |   5 +-
 xen/arch/arm/traps.c            |  45 +++++++++--
 xen/arch/arm/vpsci.c            | 138 +++++++++++++++++++++++++++++---
 xen/include/asm-arm/domain.h    |   3 +
 xen/include/asm-arm/processor.h |   3 +
 xen/include/asm-arm/psci.h      |  45 +++++++++--
 8 files changed, 222 insertions(+), 21 deletions(-)
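As a sanity check on the target_affinity_mask table in the patch, the macro arithmetic works out as below. This is a plain-C sketch: _AC() is dropped, 32-bit constants are used, and (per the AFF3 review comment) the arm64 level-3 entry would need 64-bit-wide masks instead.

```c
#include <stdint.h>

/* Plain-C versions of the masks from the patch, for illustration only. */
#define MPIDR_LEVEL_BITS 8
#define AFFINITY_MASK(level) (~((1u << ((level) * MPIDR_LEVEL_BITS)) - 1))
#define MPIDR_HWID_MASK 0xffffffu

/* Level n clears the n lowest affinity fields:
 * level 0 keeps Aff0..Aff2, level 1 keeps Aff1..Aff2, level 2 keeps Aff2. */
static const uint32_t mask_table[] = {
    MPIDR_HWID_MASK & AFFINITY_MASK(0), /* 0xffffff */
    MPIDR_HWID_MASK & AFFINITY_MASK(1), /* 0xffff00 */
    MPIDR_HWID_MASK & AFFINITY_MASK(2), /* 0xff0000 */
};
```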