From patchwork Thu Oct 29 18:34:23 2015
X-Patchwork-Submitter: Bill Fischofer
X-Patchwork-Id: 55796
From: Bill Fischofer
To: lng-odp@lists.linaro.org, maxim.uvarov@linaro.org, mike.holmes@linaro.org
Date: Thu, 29 Oct 2015 13:34:23 -0500
Message-Id: <1446143663-3781-2-git-send-email-bill.fischofer@linaro.org>
In-Reply-To: <1446143663-3781-1-git-send-email-bill.fischofer@linaro.org>
References: <1446143663-3781-1-git-send-email-bill.fischofer@linaro.org>
X-Mailer: git-send-email 2.1.4
Subject: [lng-odp] [API-NEXT PATCH 2/2] validation: synchronizers: add recursive lock tests
List-Id: "The OpenDataPlane (ODP) List"

Add tests validating the odp_spinlock_recursive and
odp_rwlock_recursive lock types.
Signed-off-by: Bill Fischofer
---
 test/validation/synchronizers/synchronizers.c | 386 ++++++++++++++++++++++++++
 test/validation/synchronizers/synchronizers.h |   6 +
 2 files changed, 392 insertions(+)

diff --git a/test/validation/synchronizers/synchronizers.c b/test/validation/synchronizers/synchronizers.c
index 6a8d79f..cebe0d2 100644
--- a/test/validation/synchronizers/synchronizers.c
+++ b/test/validation/synchronizers/synchronizers.c
@@ -63,8 +63,10 @@ typedef struct {
 
 	/* Locks */
 	odp_spinlock_t global_spinlock;
+	odp_spinlock_recursive_t global_recursive_spinlock;
 	odp_ticketlock_t global_ticketlock;
 	odp_rwlock_t global_rwlock;
+	odp_rwlock_recursive_t global_recursive_rwlock;
 
 	volatile_u32_t global_lock_owner;
 } global_shared_mem_t;
@@ -77,8 +79,10 @@ typedef struct {
 	int thread_core;
 
 	odp_spinlock_t per_thread_spinlock;
+	odp_spinlock_recursive_t per_thread_recursive_spinlock;
 	odp_ticketlock_t per_thread_ticketlock;
 	odp_rwlock_t per_thread_rwlock;
+	odp_rwlock_recursive_t per_thread_recursive_rwlock;
 
 	volatile_u64_t delay_counter;
 } per_thread_mem_t;
@@ -314,6 +318,56 @@ static void *spinlock_api_tests(void *arg UNUSED)
 	return NULL;
 }
 
+static void spinlock_recursive_api_test(odp_spinlock_recursive_t *spinlock)
+{
+	odp_spinlock_recursive_init(spinlock);
+	CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 0);
+
+	odp_spinlock_recursive_lock(spinlock);
+	CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 1);
+
+	odp_spinlock_recursive_lock(spinlock);
+	CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 1);
+
+	odp_spinlock_recursive_unlock(spinlock);
+	CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 1);
+
+	odp_spinlock_recursive_unlock(spinlock);
+	CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 0);
+
+	CU_ASSERT(odp_spinlock_recursive_trylock(spinlock) == 1);
+	CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 1);
+
+	CU_ASSERT(odp_spinlock_recursive_trylock(spinlock) == 1);
+	CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 1);
+
+	odp_spinlock_recursive_unlock(spinlock);
+	CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 1);
+
+	odp_spinlock_recursive_unlock(spinlock);
+	CU_ASSERT(odp_spinlock_recursive_is_locked(spinlock) == 0);
+}
+
+static void *spinlock_recursive_api_tests(void *arg UNUSED)
+{
+	global_shared_mem_t *global_mem;
+	per_thread_mem_t *per_thread_mem;
+	odp_spinlock_recursive_t local_recursive_spin_lock;
+
+	per_thread_mem = thread_init();
+	global_mem = per_thread_mem->global_mem;
+
+	odp_barrier_wait(&global_mem->global_barrier);
+
+	spinlock_recursive_api_test(&local_recursive_spin_lock);
+	spinlock_recursive_api_test(
+		&per_thread_mem->per_thread_recursive_spinlock);
+
+	thread_finalize(per_thread_mem);
+
+	return NULL;
+}
+
 static void ticketlock_api_test(odp_ticketlock_t *ticketlock)
 {
 	odp_ticketlock_init(ticketlock);
@@ -386,6 +440,45 @@ static void *rwlock_api_tests(void *arg UNUSED)
 	return NULL;
 }
 
+static void rwlock_recursive_api_test(odp_rwlock_recursive_t *rw_lock)
+{
+	odp_rwlock_recursive_init(rw_lock);
+	/* CU_ASSERT(odp_rwlock_is_locked(rw_lock) == 0); */
+
+	odp_rwlock_recursive_read_lock(rw_lock);
+	odp_rwlock_recursive_read_lock(rw_lock);
+
+	odp_rwlock_recursive_read_unlock(rw_lock);
+	odp_rwlock_recursive_read_unlock(rw_lock);
+
+	odp_rwlock_recursive_write_lock(rw_lock);
+	odp_rwlock_recursive_write_lock(rw_lock);
+	/* CU_ASSERT(odp_rwlock_is_locked(rw_lock) == 1); */
+
+	odp_rwlock_recursive_write_unlock(rw_lock);
+	odp_rwlock_recursive_write_unlock(rw_lock);
+	/* CU_ASSERT(odp_rwlock_is_locked(rw_lock) == 0); */
+}
+
+static void *rwlock_recursive_api_tests(void *arg UNUSED)
+{
+	global_shared_mem_t *global_mem;
+	per_thread_mem_t *per_thread_mem;
+	odp_rwlock_recursive_t local_recursive_rwlock;
+
+	per_thread_mem = thread_init();
+	global_mem = per_thread_mem->global_mem;
+
+	odp_barrier_wait(&global_mem->global_barrier);
+
+	rwlock_recursive_api_test(&local_recursive_rwlock);
+	rwlock_recursive_api_test(&per_thread_mem->per_thread_recursive_rwlock);
+
+	thread_finalize(per_thread_mem);
+
+	return NULL;
+}
+
 static void *no_lock_functional_test(void *arg UNUSED)
 {
 	global_shared_mem_t *global_mem;
@@ -543,6 +636,115 @@ static void *spinlock_functional_test(void *arg UNUSED)
 	return NULL;
 }
 
+static void *spinlock_recursive_functional_test(void *arg UNUSED)
+{
+	global_shared_mem_t *global_mem;
+	per_thread_mem_t *per_thread_mem;
+	uint32_t thread_num, resync_cnt, rs_idx, iterations, cnt;
+	uint32_t sync_failures, recursive_errs, is_locked_errs, current_errs;
+	uint32_t lock_owner_delay;
+
+	thread_num = odp_cpu_id() + 1;
+	per_thread_mem = thread_init();
+	global_mem = per_thread_mem->global_mem;
+	iterations = global_mem->g_iterations;
+
+	odp_barrier_wait(&global_mem->global_barrier);
+
+	sync_failures = 0;
+	recursive_errs = 0;
+	is_locked_errs = 0;
+	current_errs = 0;
+	rs_idx = 0;
+	resync_cnt = iterations / NUM_RESYNC_BARRIERS;
+	lock_owner_delay = BASE_DELAY;
+
+	for (cnt = 1; cnt <= iterations; cnt++) {
+		/* Acquire the shared global lock */
+		odp_spinlock_recursive_lock(
+			&global_mem->global_recursive_spinlock);
+
+		/* Make sure we have the lock AND didn't previously own it */
+		if (odp_spinlock_recursive_is_locked(
+			    &global_mem->global_recursive_spinlock) != 1)
+			is_locked_errs++;
+
+		if (global_mem->global_lock_owner != 0) {
+			current_errs++;
+			sync_failures++;
+		}
+
+		/* Now set the global_lock_owner to be us, wait a while, and
+		 * then we see if anyone else has snuck in and changed the
+		 * global_lock_owner to be themselves
+		 */
+		global_mem->global_lock_owner = thread_num;
+		odp_sync_stores();
+		thread_delay(per_thread_mem, lock_owner_delay);
+		if (global_mem->global_lock_owner != thread_num) {
+			current_errs++;
+			sync_failures++;
+		}
+
+		/* Verify that we can acquire the lock recursively */
+		odp_spinlock_recursive_lock(
+			&global_mem->global_recursive_spinlock);
+		if (global_mem->global_lock_owner != thread_num) {
+			current_errs++;
+			recursive_errs++;
+		}
+
+		/* Release the lock and verify that we still have it */
+		odp_spinlock_recursive_unlock(
+			&global_mem->global_recursive_spinlock);
+		thread_delay(per_thread_mem, lock_owner_delay);
+		if (global_mem->global_lock_owner != thread_num) {
+			current_errs++;
+			recursive_errs++;
+		}
+
+		/* Release shared lock, and make sure we no longer have it */
+		global_mem->global_lock_owner = 0;
+		odp_sync_stores();
+		odp_spinlock_recursive_unlock(
+			&global_mem->global_recursive_spinlock);
+		if (global_mem->global_lock_owner == thread_num) {
+			current_errs++;
+			sync_failures++;
+		}
+
+		if (current_errs == 0)
+			lock_owner_delay++;
+
+		/* Wait a small amount of time and rerun the test */
+		thread_delay(per_thread_mem, BASE_DELAY);
+
+		/* Try to resync all of the threads to increase contention */
+		if ((rs_idx < NUM_RESYNC_BARRIERS) &&
+		    ((cnt % resync_cnt) == (resync_cnt - 1)))
+			odp_barrier_wait(&global_mem->barrier_array[rs_idx++]);
+	}
+
+	if ((global_mem->g_verbose) &&
+	    (sync_failures != 0 || recursive_errs != 0 || is_locked_errs != 0))
+		printf("\nThread %" PRIu32 " (id=%d core=%d) had %" PRIu32
+		       " sync_failures and %" PRIu32
+		       " recursive_errs and %" PRIu32
+		       " is_locked_errs in %" PRIu32
+		       " iterations\n", thread_num,
+		       per_thread_mem->thread_id, per_thread_mem->thread_core,
+		       sync_failures, recursive_errs, is_locked_errs,
+		       iterations);
+
+	CU_ASSERT(sync_failures == 0);
+	CU_ASSERT(recursive_errs == 0);
+	CU_ASSERT(is_locked_errs == 0);
+
+	thread_finalize(per_thread_mem);
+
+	return NULL;
+}
+
 static void *ticketlock_functional_test(void *arg UNUSED)
 {
 	global_shared_mem_t *global_mem;
@@ -721,6 +923,136 @@ static void *rwlock_functional_test(void *arg UNUSED)
 	return NULL;
 }
 
+static void *rwlock_recursive_functional_test(void *arg UNUSED)
+{
+	global_shared_mem_t *global_mem;
+	per_thread_mem_t *per_thread_mem;
+	uint32_t thread_num, resync_cnt, rs_idx, iterations, cnt;
+	uint32_t sync_failures, recursive_errs, current_errs, lock_owner_delay;
+
+	thread_num = odp_cpu_id() + 1;
+	per_thread_mem = thread_init();
+	global_mem = per_thread_mem->global_mem;
+	iterations = global_mem->g_iterations;
+
+	/* Wait here until all of the threads have also reached this point */
+	odp_barrier_wait(&global_mem->global_barrier);
+
+	sync_failures = 0;
+	recursive_errs = 0;
+	current_errs = 0;
+	rs_idx = 0;
+	resync_cnt = iterations / NUM_RESYNC_BARRIERS;
+	lock_owner_delay = BASE_DELAY;
+
+	for (cnt = 1; cnt <= iterations; cnt++) {
+		/* Verify that we can obtain a read lock */
+		odp_rwlock_recursive_read_lock(
+			&global_mem->global_recursive_rwlock);
+
+		/* Verify lock is unowned (no writer holds it) */
+		thread_delay(per_thread_mem, lock_owner_delay);
+		if (global_mem->global_lock_owner != 0) {
+			current_errs++;
+			sync_failures++;
+		}
+
+		/* Verify we can get read lock recursively */
+		odp_rwlock_recursive_read_lock(
+			&global_mem->global_recursive_rwlock);
+
+		/* Verify lock is unowned (no writer holds it) */
+		thread_delay(per_thread_mem, lock_owner_delay);
+		if (global_mem->global_lock_owner != 0) {
+			current_errs++;
+			sync_failures++;
+		}
+
+		/* Release the read lock */
+		odp_rwlock_recursive_read_unlock(
+			&global_mem->global_recursive_rwlock);
+		odp_rwlock_recursive_read_unlock(
+			&global_mem->global_recursive_rwlock);
+
+		/* Acquire the shared global lock */
+		odp_rwlock_recursive_write_lock(
+			&global_mem->global_recursive_rwlock);
+
+		/* Make sure we have lock now AND didn't previously own it */
+		if (global_mem->global_lock_owner != 0) {
+			current_errs++;
+			sync_failures++;
+		}
+
+		/* Now set the global_lock_owner to be us, wait a while, and
+		 * then we see if anyone else has snuck in and changed the
+		 * global_lock_owner to be themselves
+		 */
+		global_mem->global_lock_owner = thread_num;
+		odp_sync_stores();
+		thread_delay(per_thread_mem, lock_owner_delay);
+		if (global_mem->global_lock_owner != thread_num) {
+			current_errs++;
+			sync_failures++;
+		}
+
+		/* Acquire it again and verify we still own it */
+		odp_rwlock_recursive_write_lock(
+			&global_mem->global_recursive_rwlock);
+		thread_delay(per_thread_mem, lock_owner_delay);
+		if (global_mem->global_lock_owner != thread_num) {
+			current_errs++;
+			recursive_errs++;
+		}
+
+		/* Release the recursive lock and make sure we still own it */
+		odp_rwlock_recursive_write_unlock(
+			&global_mem->global_recursive_rwlock);
+		thread_delay(per_thread_mem, lock_owner_delay);
+		if (global_mem->global_lock_owner != thread_num) {
+			current_errs++;
+			recursive_errs++;
+		}
+
+		/* Release shared lock, and make sure we no longer have it */
+		global_mem->global_lock_owner = 0;
+		odp_sync_stores();
+		odp_rwlock_recursive_write_unlock(
+			&global_mem->global_recursive_rwlock);
+		if (global_mem->global_lock_owner == thread_num) {
+			current_errs++;
+			sync_failures++;
+		}
+
+		if (current_errs == 0)
+			lock_owner_delay++;
+
+		/* Wait a small amount of time and then rerun the test */
+		thread_delay(per_thread_mem, BASE_DELAY);
+
+		/* Try to resync all of the threads to increase contention */
+		if ((rs_idx < NUM_RESYNC_BARRIERS) &&
+		    ((cnt % resync_cnt) == (resync_cnt - 1)))
+			odp_barrier_wait(&global_mem->barrier_array[rs_idx++]);
+	}
+
+	if ((global_mem->g_verbose) && (sync_failures != 0))
+		printf("\nThread %" PRIu32 " (id=%d core=%d) had %" PRIu32
+		       " sync_failures and %" PRIu32
+		       " recursive_errs in %" PRIu32
+		       " iterations\n", thread_num,
+		       per_thread_mem->thread_id,
+		       per_thread_mem->thread_core,
+		       sync_failures, recursive_errs, iterations);
+
+	CU_ASSERT(sync_failures == 0);
+	CU_ASSERT(recursive_errs == 0);
+
+	thread_finalize(per_thread_mem);
+
+	return NULL;
+}
+
 static void barrier_test_init(void)
 {
 	uint32_t num_threads, idx;
@@ -996,12 +1328,37 @@ void synchronizers_test_spinlock_functional(void)
 	odp_cunit_thread_exit(&arg);
 }
 
+void synchronizers_test_spinlock_recursive_api(void)
+{
+	pthrd_arg arg;
+
+	arg.numthrds = global_mem->g_num_threads;
+	odp_cunit_thread_create(spinlock_recursive_api_tests, &arg);
+	odp_cunit_thread_exit(&arg);
+}
+
+void synchronizers_test_spinlock_recursive_functional(void)
+{
+	pthrd_arg arg;
+
+	arg.numthrds = global_mem->g_num_threads;
+	odp_spinlock_recursive_init(&global_mem->global_recursive_spinlock);
+	odp_cunit_thread_create(spinlock_recursive_functional_test, &arg);
+	odp_cunit_thread_exit(&arg);
+}
+
 odp_testinfo_t synchronizers_suite_spinlock[] = {
 	ODP_TEST_INFO(synchronizers_test_spinlock_api),
 	ODP_TEST_INFO(synchronizers_test_spinlock_functional),
 	ODP_TEST_INFO_NULL
 };
 
+odp_testinfo_t synchronizers_suite_spinlock_recursive[] = {
+	ODP_TEST_INFO(synchronizers_test_spinlock_recursive_api),
+	ODP_TEST_INFO(synchronizers_test_spinlock_recursive_functional),
+	ODP_TEST_INFO_NULL
+};
+
 /* Ticket lock tests */
 void synchronizers_test_ticketlock_api(void)
 {
@@ -1055,6 +1412,31 @@ odp_testinfo_t synchronizers_suite_rwlock[] = {
 	ODP_TEST_INFO_NULL
 };
 
+void synchronizers_test_rwlock_recursive_api(void)
+{
+	pthrd_arg arg;
+
+	arg.numthrds = global_mem->g_num_threads;
+	odp_cunit_thread_create(rwlock_recursive_api_tests, &arg);
+	odp_cunit_thread_exit(&arg);
+}
+
+void synchronizers_test_rwlock_recursive_functional(void)
+{
+	pthrd_arg arg;
+
+	arg.numthrds = global_mem->g_num_threads;
+	odp_rwlock_recursive_init(&global_mem->global_recursive_rwlock);
+	odp_cunit_thread_create(rwlock_recursive_functional_test, &arg);
+	odp_cunit_thread_exit(&arg);
+}
+
+odp_testinfo_t synchronizers_suite_rwlock_recursive[] = {
+	ODP_TEST_INFO(synchronizers_test_rwlock_recursive_api),
+	ODP_TEST_INFO(synchronizers_test_rwlock_recursive_functional),
+	ODP_TEST_INFO_NULL
+};
+
 int synchronizers_suite_init(void)
 {
 	uint32_t num_threads, idx;
@@ -1216,10 +1598,14 @@ odp_suiteinfo_t synchronizers_suites[] = {
 	 synchronizers_suite_no_locking},
 	{"spinlock", synchronizers_suite_init, NULL,
 	 synchronizers_suite_spinlock},
+	{"spinlock_recursive", synchronizers_suite_init, NULL,
+	 synchronizers_suite_spinlock_recursive},
 	{"ticketlock", synchronizers_suite_init, NULL,
 	 synchronizers_suite_ticketlock},
 	{"rwlock", synchronizers_suite_init, NULL,
 	 synchronizers_suite_rwlock},
+	{"rwlock_recursive", synchronizers_suite_init, NULL,
+	 synchronizers_suite_rwlock_recursive},
 	{"atomic", NULL, NULL,
 	 synchronizers_suite_atomic},
 	ODP_SUITE_INFO_NULL
diff --git a/test/validation/synchronizers/synchronizers.h b/test/validation/synchronizers/synchronizers.h
index f16477c..9725996 100644
--- a/test/validation/synchronizers/synchronizers.h
+++ b/test/validation/synchronizers/synchronizers.h
@@ -15,10 +15,14 @@ void synchronizers_test_barrier_functional(void);
 void synchronizers_test_no_lock_functional(void);
 void synchronizers_test_spinlock_api(void);
 void synchronizers_test_spinlock_functional(void);
+void synchronizers_test_spinlock_recursive_api(void);
+void synchronizers_test_spinlock_recursive_functional(void);
 void synchronizers_test_ticketlock_api(void);
 void synchronizers_test_ticketlock_functional(void);
 void synchronizers_test_rwlock_api(void);
 void synchronizers_test_rwlock_functional(void);
+void synchronizers_test_rwlock_recursive_api(void);
+void synchronizers_test_rwlock_recursive_functional(void);
 void synchronizers_test_atomic_inc_dec(void);
 void synchronizers_test_atomic_add_sub(void);
 void synchronizers_test_atomic_fetch_inc_dec(void);
@@ -28,8 +32,10 @@ void synchronizers_test_atomic_fetch_add_sub(void);
 extern odp_testinfo_t synchronizers_suite_barrier[];
 extern odp_testinfo_t synchronizers_suite_no_locking[];
 extern odp_testinfo_t synchronizers_suite_spinlock[];
+extern odp_testinfo_t synchronizers_suite_spinlock_recursive[];
 extern odp_testinfo_t synchronizers_suite_ticketlock[];
 extern odp_testinfo_t synchronizers_suite_rwlock[];
+extern odp_testinfo_t synchronizers_suite_rwlock_recursive[];
 extern odp_testinfo_t synchronizers_suite_atomic[];
 
 /* test array init/term functions: */