Message ID: 20190731160719.11396-25-alex.bennee@linaro.org
State: New
Series: plugins for TCG
On Jul 31 17:06, Alex Bennée wrote:
> We need to keep a local per-cpu copy of the data as other threads may
> be running. We use an automatically growing array and re-use the space
> for subsequent queries.

[...]

> +bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr, int mmu_idx,
> +                       bool is_store, struct qemu_plugin_hwaddr *data)
> +{
> +    CPUArchState *env = cpu->env_ptr;
> +    CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
> +    target_ulong tlb_addr = is_store ? tlb_addr_write(tlbe) : tlbe->addr_read;
> +
> +    if (tlb_hit(tlb_addr, addr)) {
> +        if (tlb_addr & TLB_MMIO) {
> +            data->hostaddr = 0;
> +            data->is_io = true;
> +            /* XXX: lookup device */
> +        } else {
> +            data->hostaddr = addr + tlbe->addend;
> +            data->is_io = false;
> +        }
> +        return true;
> +    }
> +    return false;
> +}

In what cases do you expect tlb_hit() should not evaluate to true here?
Will returns of false only be in error cases, or do you expect it can
occur during normal operation? In particular, I'm interested in ensuring
this is as reliable as possible, since some plugins may require physical
addresses.

> +struct qemu_plugin_hwaddr *qemu_plugin_get_hwaddr(qemu_plugin_meminfo_t info,
> +                                                  uint64_t vaddr)
> +{
> +    CPUState *cpu = current_cpu;
> +    unsigned int mmu_idx = info >> TRACE_MEM_MMU_SHIFT;
> +    struct qemu_plugin_hwaddr *hwaddr;
> +
> +    /* Ensure we have memory allocated for this work */
> +    if (!hwaddr_refs) {
> +        hwaddr_refs = g_array_sized_new(false, true,
> +                                        sizeof(struct qemu_plugin_hwaddr),
> +                                        cpu->cpu_index + 1);
> +    } else if (cpu->cpu_index >= hwaddr_refs->len) {
> +        hwaddr_refs = g_array_set_size(hwaddr_refs, cpu->cpu_index + 1);
> +    }

Are there one or more race conditions with the allocations here? If so,
could they be solved by doing the allocations at plugin initialization
and when the number of online cpus changes, instead of lazily?
> uint64_t qemu_plugin_hwaddr_to_raddr(const struct qemu_plugin_hwaddr *haddr)

I was at first confused about the utility of this function until I
(re-?)discovered you had to convert first to hwaddr and then to raddr
to get a "true" physical address. Perhaps that could be added to a
comment here or in the API definition in the main plugin header file.

-Aaron
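[Editor's note: the two-step conversion Aaron describes — virtual address to qemu_plugin_hwaddr, then to ram address — can be sketched standalone. The stub bodies below are placeholders so the flow compiles outside the QEMU tree; only the two API names and the qemu_plugin_hwaddr fields come from this series, everything else (the mock values, the helper name vaddr_to_raddr) is illustrative.]

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* Stand-ins for the plugin API types in this series (not the real headers). */
typedef uint32_t qemu_plugin_meminfo_t;
struct qemu_plugin_hwaddr { uint64_t hostaddr; bool is_io; };

/* Mocked versions of the two API calls so the flow runs standalone;
 * the real implementations live in plugins/api.c. */
static struct qemu_plugin_hwaddr mock = { .hostaddr = 0x1000, .is_io = false };

struct qemu_plugin_hwaddr *qemu_plugin_get_hwaddr(qemu_plugin_meminfo_t info,
                                                  uint64_t vaddr)
{
    (void)info; (void)vaddr;
    return &mock;               /* the real call returns NULL on a TLB miss */
}

uint64_t qemu_plugin_hwaddr_to_raddr(const struct qemu_plugin_hwaddr *haddr)
{
    /* Illustrative host-to-ram offset; the real code uses
     * qemu_ram_addr_from_host(). */
    return (haddr && !haddr->is_io) ? haddr->hostaddr - 0x800 : 0;
}

/* The two-step lookup a plugin's memory callback would perform:
 * vaddr -> qemu_plugin_hwaddr -> ram address. */
uint64_t vaddr_to_raddr(qemu_plugin_meminfo_t info, uint64_t vaddr, bool *valid)
{
    struct qemu_plugin_hwaddr *hw = qemu_plugin_get_hwaddr(info, vaddr);
    if (!hw || hw->is_io) {
        *valid = false;         /* TLB miss or MMIO: no ram address */
        return 0;
    }
    *valid = true;
    return qemu_plugin_hwaddr_to_raddr(hw);
}
```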
On 8/1/19 7:14 AM, Aaron Lindsay OS via Qemu-devel wrote:
> On Jul 31 17:06, Alex Bennée wrote:
>> We need to keep a local per-cpu copy of the data as other threads may
>> be running. We use an automatically growing array and re-use the space
>> for subsequent queries.
>
> [...]
>
>> +bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr, int mmu_idx,
>> +                       bool is_store, struct qemu_plugin_hwaddr *data)
>> +{
>> +    CPUArchState *env = cpu->env_ptr;
>> +    CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
>> +    target_ulong tlb_addr = is_store ? tlb_addr_write(tlbe) : tlbe->addr_read;
>> +
>> +    if (tlb_hit(tlb_addr, addr)) {
>> +        if (tlb_addr & TLB_MMIO) {
>> +            data->hostaddr = 0;
>> +            data->is_io = true;
>> +            /* XXX: lookup device */
>> +        } else {
>> +            data->hostaddr = addr + tlbe->addend;
>> +            data->is_io = false;
>> +        }
>> +        return true;
>> +    }
>> +    return false;
>> +}
>
> In what cases do you expect tlb_hit() should not evaluate to true here?
> Will returns of false only be in error cases, or do you expect it can
> occur during normal operation? In particular, I'm interested in ensuring
> this is as reliable as possible, since some plugins may require physical
> addresses.

I have the same question. Given the access has just succeeded, it would
seem that the tlb entry *must* hit. No victim tlb check or anything.

r~
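[Editor's note: for readers following the "must hit" argument — tlb_hit() is essentially a page-aligned comparison of the access address against the address cached in the TLB entry. A simplified standalone model, assuming 4K pages (the real definition lives in QEMU's headers and also folds in flag bits such as TLB_INVALID_MASK):]

```c
#include <stdbool.h>
#include <stdint.h>

#define TARGET_PAGE_BITS 12                               /* assume 4K pages */
#define TARGET_PAGE_MASK (~(uint64_t)((1u << TARGET_PAGE_BITS) - 1))

/* Simplified model of tlb_hit(): the entry matches when the access
 * falls within the page that the TLB entry caches. */
bool tlb_hit(uint64_t tlb_addr, uint64_t addr)
{
    return (tlb_addr & TARGET_PAGE_MASK) == (addr & TARGET_PAGE_MASK);
}
```

Since tlb_plugin_lookup() runs immediately after the softmmu path has filled the entry for this very access, the comparison can only fail if something has evicted the entry in between — which is Richard's point.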
Aaron Lindsay OS <aaron@os.amperecomputing.com> writes:

> On Jul 31 17:06, Alex Bennée wrote:
>> We need to keep a local per-cpu copy of the data as other threads may
>> be running. We use an automatically growing array and re-use the space
>> for subsequent queries.
>
> [...]
>
>> +bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr, int mmu_idx,
>> +                       bool is_store, struct qemu_plugin_hwaddr *data)
>> +{
>> +    CPUArchState *env = cpu->env_ptr;
>> +    CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
>> +    target_ulong tlb_addr = is_store ? tlb_addr_write(tlbe) : tlbe->addr_read;
>> +
>> +    if (tlb_hit(tlb_addr, addr)) {
>> +        if (tlb_addr & TLB_MMIO) {
>> +            data->hostaddr = 0;
>> +            data->is_io = true;
>> +            /* XXX: lookup device */
>> +        } else {
>> +            data->hostaddr = addr + tlbe->addend;
>> +            data->is_io = false;
>> +        }
>> +        return true;
>> +    }
>> +    return false;
>> +}
>
> In what cases do you expect tlb_hit() should not evaluate to true here?
> Will returns of false only be in error cases, or do you expect it can
> occur during normal operation? In particular, I'm interested in ensuring
> this is as reliable as possible, since some plugins may require physical
> addresses.

Only if the API is misused and a call is made outside of the hooked
memory operation. An assert would be too strong as the plugin could
then bring down QEMU - I guess we could use some sort of error_report...
>
>> +struct qemu_plugin_hwaddr *qemu_plugin_get_hwaddr(qemu_plugin_meminfo_t info,
>> +                                                  uint64_t vaddr)
>> +{
>> +    CPUState *cpu = current_cpu;
>> +    unsigned int mmu_idx = info >> TRACE_MEM_MMU_SHIFT;
>> +    struct qemu_plugin_hwaddr *hwaddr;
>> +
>> +    /* Ensure we have memory allocated for this work */
>> +    if (!hwaddr_refs) {
>> +        hwaddr_refs = g_array_sized_new(false, true,
>> +                                        sizeof(struct qemu_plugin_hwaddr),
>> +                                        cpu->cpu_index + 1);
>> +    } else if (cpu->cpu_index >= hwaddr_refs->len) {
>> +        hwaddr_refs = g_array_set_size(hwaddr_refs, cpu->cpu_index + 1);
>> +    }
>
> Are there one or more race conditions with the allocations here? If so,
> could they be solved by doing the allocations at plugin initialization
> and when the number of online cpus changes, instead of lazily?

It might be easier to just keep a __thread local array here and let TLS
deal with it.

--
Alex Bennée
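[Editor's note: the __thread approach suggested above sidesteps the race entirely, since each vCPU thread owns its buffer and no shared array is ever resized. A minimal standalone sketch of the idea (the buffer and accessor names are illustrative, not from the patch):]

```c
#include <stdbool.h>
#include <stdint.h>

struct qemu_plugin_hwaddr {
    uint64_t hostaddr;
    bool is_io;
};

/* One buffer per thread: no shared state, hence no locking and no
 * race when a new vCPU thread appears. */
static __thread struct qemu_plugin_hwaddr hwaddr_buf;

struct qemu_plugin_hwaddr *get_hwaddr_buf(void)
{
    /* Re-used for every query on this thread, exactly like the
     * per-cpu GArray slot, but with the lifetime managed by TLS. */
    return &hwaddr_buf;
}
```

The trade-off is that TLS buffers are tied to threads rather than cpu_index values, which matches how softmmu vCPU threads run in practice.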
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index f7c0290639c..f37e89c806d 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1130,6 +1130,38 @@ void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
     return (void *)((uintptr_t)addr + entry->addend);
 }
 
+
+#ifdef CONFIG_PLUGIN
+/*
+ * Perform a TLB lookup and populate the qemu_plugin_hwaddr structure.
+ * This should be a hot path as we will have just looked this path up
+ * in the softmmu lookup code (or helper). We don't handle re-fills or
+ * checking the victim table. This is purely informational.
+ */
+
+bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr, int mmu_idx,
+                       bool is_store, struct qemu_plugin_hwaddr *data)
+{
+    CPUArchState *env = cpu->env_ptr;
+    CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
+    target_ulong tlb_addr = is_store ? tlb_addr_write(tlbe) : tlbe->addr_read;
+
+    if (tlb_hit(tlb_addr, addr)) {
+        if (tlb_addr & TLB_MMIO) {
+            data->hostaddr = 0;
+            data->is_io = true;
+            /* XXX: lookup device */
+        } else {
+            data->hostaddr = addr + tlbe->addend;
+            data->is_io = false;
+        }
+        return true;
+    }
+    return false;
+}
+
+#endif
+
 /* Probe for a read-modify-write atomic operation. Do not allow unaligned
  * operations, or io operations to proceed. Return the host address. */
 static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 90045e77c1f..c42626e35b1 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -262,6 +262,17 @@ void tlb_set_page(CPUState *cpu, target_ulong vaddr,
                   int mmu_idx, target_ulong size);
 void probe_write(CPUArchState *env, target_ulong addr, int size, int mmu_idx,
                  uintptr_t retaddr);
+
+/**
+ * tlb_plugin_lookup: query last TLB lookup
+ * @cpu: cpu environment
+ *
+ * This function can be used directly after a memory operation to
+ * query information about the access. It is used by the plugin
+ * infrastructure to expose more information about the address.
+ */
+bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr, int mmu_idx,
+                       bool is_store, struct qemu_plugin_hwaddr *data);
 #else
 static inline void tlb_init(CPUState *cpu)
 {
@@ -311,6 +322,12 @@ static inline void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu,
                                                        uint16_t idxmap)
 {
 }
+static inline bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr,
+                                     int mmu_idx, bool is_store,
+                                     struct qemu_plugin_hwaddr *data)
+{
+    return false;
+}
 #endif
 
 #define CODE_GEN_ALIGN 16 /* must be >= of the size of a icache line */
diff --git a/include/qemu/plugin.h b/include/qemu/plugin.h
index 3c46a241669..657345df60c 100644
--- a/include/qemu/plugin.h
+++ b/include/qemu/plugin.h
@@ -182,6 +182,12 @@ struct qemu_plugin_insn *qemu_plugin_tb_insn_get(struct qemu_plugin_tb *tb)
     return insn;
 }
 
+struct qemu_plugin_hwaddr {
+    uint64_t hostaddr;
+    bool is_io;
+};
+
+
 #ifdef CONFIG_PLUGIN
 
 void qemu_plugin_vcpu_init_hook(CPUState *cpu);
diff --git a/plugins/api.c b/plugins/api.c
index 586bb8789f1..4b3ac9e31fb 100644
--- a/plugins/api.c
+++ b/plugins/api.c
@@ -39,7 +39,7 @@
 #include "cpu.h"
 #include "sysemu/sysemu.h"
 #include "tcg/tcg.h"
-#include "trace/mem-internal.h" /* mem_info macros */
+#include "exec/exec-all.h"
 #include "plugin.h"
 #ifndef CONFIG_USER_ONLY
 #include "hw/boards.h"
@@ -240,11 +240,42 @@ bool qemu_plugin_mem_is_store(qemu_plugin_meminfo_t info)
  * Virtual Memory queries
  */
 
+#ifdef CONFIG_SOFTMMU
+static GArray *hwaddr_refs;
+
+struct qemu_plugin_hwaddr *qemu_plugin_get_hwaddr(qemu_plugin_meminfo_t info,
+                                                  uint64_t vaddr)
+{
+    CPUState *cpu = current_cpu;
+    unsigned int mmu_idx = info >> TRACE_MEM_MMU_SHIFT;
+    struct qemu_plugin_hwaddr *hwaddr;
+
+    /* Ensure we have memory allocated for this work */
+    if (!hwaddr_refs) {
+        hwaddr_refs = g_array_sized_new(false, true,
+                                        sizeof(struct qemu_plugin_hwaddr),
+                                        cpu->cpu_index + 1);
+    } else if (cpu->cpu_index >= hwaddr_refs->len) {
+        hwaddr_refs = g_array_set_size(hwaddr_refs, cpu->cpu_index + 1);
+    }
+
+    hwaddr = &g_array_index(hwaddr_refs, struct qemu_plugin_hwaddr,
+                            cpu->cpu_index);
+
+    if (!tlb_plugin_lookup(cpu, vaddr, mmu_idx,
+                           info & TRACE_MEM_ST, hwaddr)) {
+        return NULL;
+    }
+
+    return hwaddr;
+}
+#else
 struct qemu_plugin_hwaddr *qemu_plugin_get_hwaddr(qemu_plugin_meminfo_t info,
                                                   uint64_t vaddr)
 {
     return NULL;
 }
+#endif
 
 bool qemu_plugin_hwaddr_is_io(struct qemu_plugin_hwaddr *hwaddr)
 {
@@ -253,14 +284,15 @@ bool qemu_plugin_hwaddr_is_io(struct qemu_plugin_hwaddr *hwaddr)
 
 uint64_t qemu_plugin_hwaddr_to_raddr(const struct qemu_plugin_hwaddr *haddr)
 {
-#if 0 /* XXX FIXME should be SOFTMMU */
-    ram_addr_t ram_addr;
-
-    g_assert(haddr);
-    ram_addr = qemu_ram_addr_from_host(haddr);
-    if (ram_addr == RAM_ADDR_INVALID) {
-        error_report("Bad ram pointer %p", haddr);
-        abort();
+#ifdef CONFIG_SOFTMMU
+    ram_addr_t ram_addr = 0;
+
+    if (haddr && !haddr->is_io) {
+        ram_addr = qemu_ram_addr_from_host((void *) haddr->hostaddr);
+        if (ram_addr == RAM_ADDR_INVALID) {
+            error_report("Bad ram pointer %"PRIx64"", haddr->hostaddr);
+            abort();
+        }
     }
     return ram_addr;
 #else
We need to keep a local per-cpu copy of the data as other threads may
be running. We use an automatically growing array and re-use the space
for subsequent queries.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 accel/tcg/cputlb.c      | 32 ++++++++++++++++++++++++++
 include/exec/exec-all.h | 17 ++++++++++++++
 include/qemu/plugin.h   |  6 +++++
 plugins/api.c           | 50 +++++++++++++++++++++++++++++++++--------
 4 files changed, 96 insertions(+), 9 deletions(-)

--
2.20.1
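[Editor's note: the grow-on-demand, re-use pattern from qemu_plugin_get_hwaddr(), reduced to plain C with realloc standing in for g_array_set_size. The single shared pointer updated during growth is exactly what makes the race question above pertinent: two vCPU threads entering the resize path at once can clobber each other. Names here (refs, hwaddr_slot) are illustrative.]

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct qemu_plugin_hwaddr { uint64_t hostaddr; bool is_io; };

static struct qemu_plugin_hwaddr *refs;   /* shared, like hwaddr_refs */
static unsigned int refs_len;

/* Grow the array on demand and hand back this cpu's slot, zeroing new
 * entries.  Unsafe if two threads resize concurrently -- the point of
 * the review comment above. */
struct qemu_plugin_hwaddr *hwaddr_slot(unsigned int cpu_index)
{
    if (cpu_index >= refs_len) {
        unsigned int new_len = cpu_index + 1;
        refs = realloc(refs, new_len * sizeof(*refs));
        memset(refs + refs_len, 0, (new_len - refs_len) * sizeof(*refs));
        refs_len = new_len;
    }
    return &refs[cpu_index];
}
```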