| Message ID | 20200923151901.745277-2-philmd@redhat.com |
|---|---|
| State | New |
| Series | qemu/atomic.h: rename atomic_ to qatomic_ |
On Wed, Sep 23, 2020 at 05:19:00PM +0200, Philippe Mathieu-Daudé wrote:
> To limit the number of checkpatch errors in the next commit,
> clean coding style issues first.
>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
> False positive:
>
> ERROR: Use of volatile is usually wrong, please add a comment
> #11: FILE: include/qemu/atomic.h:328:
> +#define atomic_read__nocheck(p) (*(__typeof__(*(p)) volatile *) (p))
>
> ERROR: Use of volatile is usually wrong, please add a comment
> #12: FILE: include/qemu/atomic.h:329:
> +#define atomic_set__nocheck(p, i) ((*(__typeof__(*(p)) volatile *) (p)) = (i))
> ---
>  include/qemu/atomic.h | 9 +++++----
>  util/bitmap.c         | 3 ++-
>  util/rcu.c            | 6 ++++--
>  3 files changed, 11 insertions(+), 7 deletions(-)

I already sent a pull request with the patch that renames atomic.h, but
this patch can be rebased on top:

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
It would also be nice to squash the following on top for a completely
checkpatch-clean version, since the original patch introduces at least one
new issue.
Carlo
--- >8 ---
Subject: fixup! [PATCH 1/2] qemu/atomic.h: rename atomic_ to qatomic_
fixes:
ERROR: Macros with multiple statements should be enclosed in a do - while loop
+#define qatomic_rcu_read__nocheck(ptr, valptr) \
+    __atomic_load(ptr, valptr, __ATOMIC_RELAXED); \
+    smp_read_barrier_depends();
false positive:
ERROR: memory barrier without comment
+#define qatomic_xchg(ptr, i) (smp_mb(), __sync_lock_test_and_set(ptr, i))
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
---
include/qemu/atomic.h | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
index 87b85f9f6d..be47e083be 100644
--- a/include/qemu/atomic.h
+++ b/include/qemu/atomic.h
@@ -149,9 +149,10 @@
 #define qatomic_rcu_read__nocheck(ptr, valptr) \
     __atomic_load(ptr, valptr, __ATOMIC_CONSUME);
 #else
-#define qatomic_rcu_read__nocheck(ptr, valptr) \
+#define qatomic_rcu_read__nocheck(ptr, valptr) do { \
     __atomic_load(ptr, valptr, __ATOMIC_RELAXED); \
-    smp_read_barrier_depends();
+    smp_read_barrier_depends(); \
+} while (0)
 #endif
 
 #define qatomic_rcu_read(ptr) \
--
2.28.0.681.g6f77f65b4e
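
For context, here is a minimal stand-alone sketch of why checkpatch wants
multi-statement macros wrapped in do { ... } while (0). It is not taken from
the patch, and the log_twice_* names are made up for illustration: without the
wrapper, only the first statement of the macro ends up under an un-braced if.

#include <stdio.h>

/* Two statements with no do/while wrapper: under an un-braced "if", only
 * the first puts() is conditional; the second always runs. */
#define log_twice_bad(msg) \
    puts(msg);             \
    puts(msg);

/* The do { ... } while (0) form expands to a single statement, so it can
 * be used safely anywhere a statement is expected. */
#define log_twice_ok(msg) do { \
    puts(msg);                 \
    puts(msg);                 \
} while (0)

int main(void)
{
    int verbose = 0;

    if (verbose)
        log_twice_bad("bad");   /* prints "bad" once even though verbose is 0 */

    if (verbose)
        log_twice_ok("ok");     /* prints nothing, as intended */

    return 0;
}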
diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
index ff72db51154..1774133e5d0 100644
--- a/include/qemu/atomic.h
+++ b/include/qemu/atomic.h
@@ -325,11 +325,11 @@
 /* These will only be atomic if the processor does the fetch or store
  * in a single issue memory operation
  */
-#define atomic_read__nocheck(p) (*(__typeof__(*(p)) volatile*) (p))
-#define atomic_set__nocheck(p, i) ((*(__typeof__(*(p)) volatile*) (p)) = (i))
+#define atomic_read__nocheck(p) (*(__typeof__(*(p)) volatile *) (p))
+#define atomic_set__nocheck(p, i) ((*(__typeof__(*(p)) volatile *) (p)) = (i))
 
 #define atomic_read(ptr) atomic_read__nocheck(ptr)
-#define atomic_set(ptr, i) atomic_set__nocheck(ptr,i)
+#define atomic_set(ptr, i) atomic_set__nocheck(ptr, i)
 
 /**
  * atomic_rcu_read - reads a RCU-protected pointer to a local variable
@@ -440,7 +440,8 @@
 #endif
 #endif
 
-/* atomic_mb_read/set semantics map Java volatile variables. They are
+/*
+ * atomic_mb_read/set semantics map Java volatile variables. They are
  * less expensive on some platforms (notably POWER) than fully
  * sequentially consistent operations.
  *
diff --git a/util/bitmap.c b/util/bitmap.c
index 1753ff7f5bd..c4fb86db72a 100644
--- a/util/bitmap.c
+++ b/util/bitmap.c
@@ -211,7 +211,8 @@ void bitmap_set_atomic(unsigned long *map, long start, long nr)
         mask_to_set &= BITMAP_LAST_WORD_MASK(size);
         atomic_or(p, mask_to_set);
     } else {
-        /* If we avoided the full barrier in atomic_or(), issue a
+        /*
+         * If we avoided the full barrier in atomic_or(), issue a
          * barrier to account for the assignments in the while loop.
          */
         smp_mb();
diff --git a/util/rcu.c b/util/rcu.c
index c4fefa9333e..b5238b8ed02 100644
--- a/util/rcu.c
+++ b/util/rcu.c
@@ -82,7 +82,8 @@ static void wait_for_readers(void)
          */
         qemu_event_reset(&rcu_gp_event);
 
-        /* Instead of using atomic_mb_set for index->waiting, and
+        /*
+         * Instead of using atomic_mb_set for index->waiting, and
          * atomic_mb_read for index->ctr, memory barriers are placed
          * manually since writes to different threads are independent.
          * qemu_event_reset has acquire semantics, so no memory barrier
@@ -151,7 +152,8 @@ void synchronize_rcu(void)
     QEMU_LOCK_GUARD(&rcu_registry_lock);
 
     if (!QLIST_EMPTY(&registry)) {
-        /* In either case, the atomic_mb_set below blocks stores that free
+        /*
+         * In either case, the atomic_mb_set below blocks stores that free
          * old RCU-protected pointers.
          */
         if (sizeof(rcu_gp_ctr) < 8) {
To limit the number of checkpatch errors in the next commit,
clean coding style issues first.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
False positive:

ERROR: Use of volatile is usually wrong, please add a comment
#11: FILE: include/qemu/atomic.h:328:
+#define atomic_read__nocheck(p) (*(__typeof__(*(p)) volatile *) (p))

ERROR: Use of volatile is usually wrong, please add a comment
#12: FILE: include/qemu/atomic.h:329:
+#define atomic_set__nocheck(p, i) ((*(__typeof__(*(p)) volatile *) (p)) = (i))
---
 include/qemu/atomic.h | 9 +++++----
 util/bitmap.c         | 3 ++-
 util/rcu.c            | 6 ++++--
 3 files changed, 11 insertions(+), 7 deletions(-)
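
As an aside on the "use of volatile" false positives above: the cast through a
volatile-qualified pointer in atomic_read__nocheck()/atomic_set__nocheck() is
deliberate, because it forces the compiler to emit exactly one load or store
rather than caching, merging, or eliding the access. A minimal stand-alone
sketch of the same idiom follows; the example_* names are made up for
illustration (not QEMU's), and it assumes GCC or Clang for __typeof__.

#include <stdio.h>

/* Same idiom as the QEMU macros flagged by checkpatch: the volatile cast
 * makes the access a single, non-elidable load or store. */
#define example_read__nocheck(p)    (*(__typeof__(*(p)) volatile *) (p))
#define example_set__nocheck(p, i)  ((*(__typeof__(*(p)) volatile *) (p)) = (i))

int main(void)
{
    int flag = 0;

    example_set__nocheck(&flag, 1);                 /* one store to flag */
    printf("%d\n", example_read__nocheck(&flag));   /* one load from flag */
    return 0;
}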