From patchwork Tue Jan 3 15:10:15 2017
X-Patchwork-Submitter: Christophe Milard
X-Patchwork-Id: 89661
From: Christophe Milard
To: mike.holmes@linaro.org, bill.fischofer@linaro.org, yi.he@linaro.org,
	forrest.shi@linaro.org, francois.ozog@linaro.org, lng-odp@lists.linaro.org
Date: Tue, 3 Jan 2017 16:10:15 +0100
Message-Id: <1483456215-40789-7-git-send-email-christophe.milard@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1483456215-40789-1-git-send-email-christophe.milard@linaro.org>
References: <1483456215-40789-1-git-send-email-christophe.milard@linaro.org>
Subject: [lng-odp] [API-NEXTv5 6/6] test: drv: shm: adding buddy allocation stress tests
List-Id: "The OpenDataPlane (ODP) List" <lng-odp@lists.linaro.org>

Stress tests for the random size allocator (the buddy allocator in
linux-generic) are added here.

Signed-off-by: Christophe Milard
---
 .../common_plat/validation/drv/drvshmem/drvshmem.c | 177 +++++++++++++++++++++
 .../common_plat/validation/drv/drvshmem/drvshmem.h |   1 +
 2 files changed, 178 insertions(+)

--
2.7.4

diff --git a/test/common_plat/validation/drv/drvshmem/drvshmem.c b/test/common_plat/validation/drv/drvshmem/drvshmem.c
index d4dedea..0f882ae 100644
--- a/test/common_plat/validation/drv/drvshmem/drvshmem.c
+++ b/test/common_plat/validation/drv/drvshmem/drvshmem.c
@@ -938,6 +938,182 @@ void drvshmem_test_slab_basic(void)
 	odpdrv_shm_pool_destroy(pool);
 }
 
+/*
+ * thread part of drvshmem_test_buddy_stress
+ */
+static int run_test_buddy_stress(void *arg ODP_UNUSED)
+{
+	odpdrv_shm_t shm;
+	odpdrv_shm_pool_t pool;
+	uint8_t *address;
+	shared_test_data_t *glob_data;
+	uint8_t random_bytes[STRESS_RANDOM_SZ];
+	uint32_t index;
+	uint32_t size;
+	uint8_t data;
+	uint32_t iter;
+	uint32_t i;
+
+	shm = odpdrv_shm_lookup_by_name(MEM_NAME);
+	glob_data = odpdrv_shm_addr(shm);
+	CU_ASSERT_PTR_NOT_NULL(glob_data);
+
+	/* get the pool to test */
+	pool = odpdrv_shm_pool_lookup(POOL_NAME);
+
+	/* wait for the general GO! */
+	odpdrv_barrier_wait(&glob_data->test_barrier1);
+	/*
+	 * At each iteration: pick a random index into glob_data->stress[].
+	 * If the entry is free, allocate a small block of random size.
+	 * If it is already allocated, check its contents and free it.
+	 * Note that different threads may allocate or free a given block.
+	 */
+	for (iter = 0; iter < STRESS_ITERATION; iter++) {
+		/* get 4 random bytes from which index, size, align, flags
+		 * and data will be derived:
+		 */
+		odp_random_data(random_bytes, STRESS_RANDOM_SZ, 0);
+		index = random_bytes[0] & (STRESS_SIZE - 1);
+
+		odp_spinlock_lock(&glob_data->stress_lock);
+
+		switch (glob_data->stress[index].state) {
+		case STRESS_FREE:
+			/* allocate a new block for this entry */
+
+			glob_data->stress[index].state = STRESS_BUSY;
+			odp_spinlock_unlock(&glob_data->stress_lock);
+
+			size = (random_bytes[1] + 1) << 4; /* up to 4KB */
+			data = random_bytes[2];
+
+			address = odpdrv_shm_pool_alloc(pool, size);
+			glob_data->stress[index].address = address;
+			if (address == NULL) { /* out of mem ? */
+				odp_spinlock_lock(&glob_data->stress_lock);
+				glob_data->stress[index].state = STRESS_ALLOC;
+				odp_spinlock_unlock(&glob_data->stress_lock);
+				continue;
+			}
+
+			glob_data->stress[index].size = size;
+			glob_data->stress[index].data_val = data;
+
+			/* write some data: */
+			for (i = 0; i < size; i++)
+				address[i] = (data++) & 0xFF;
+			odp_spinlock_lock(&glob_data->stress_lock);
+			glob_data->stress[index].state = STRESS_ALLOC;
+			odp_spinlock_unlock(&glob_data->stress_lock);
+
+			break;
+
+		case STRESS_ALLOC:
+			/* free the block for this entry */
+
+			glob_data->stress[index].state = STRESS_BUSY;
+			odp_spinlock_unlock(&glob_data->stress_lock);
+			address = glob_data->stress[index].address;
+
+			if (address == NULL) { /* allocation failed earlier? */
+				odp_spinlock_lock(&glob_data->stress_lock);
+				glob_data->stress[index].state = STRESS_FREE;
+				odp_spinlock_unlock(&glob_data->stress_lock);
+				continue;
+			}
+
+			/* check that the data is reachable and correct: */
+			data = glob_data->stress[index].data_val;
+			size = glob_data->stress[index].size;
+			for (i = 0; i < size; i++) {
+				CU_ASSERT(address[i] == (data & 0xFF));
+				data++;
+			}
+
+			odpdrv_shm_pool_free(pool, address);
+
+			odp_spinlock_lock(&glob_data->stress_lock);
+			glob_data->stress[index].state = STRESS_FREE;
+			odp_spinlock_unlock(&glob_data->stress_lock);
+
+			break;
+
+		case STRESS_BUSY:
+		default:
+			odp_spinlock_unlock(&glob_data->stress_lock);
+			break;
+		}
+	}
+
+	fflush(stdout);
+	return CU_get_number_of_failures();
+}
+
+/*
+ * stress test
+ */
+void drvshmem_test_buddy_stress(void)
+{
+	odpdrv_shm_pool_param_t pool_params;
+	odpdrv_shm_pool_t pool;
+	pthrd_arg thrdarg;
+	odpdrv_shm_t shm;
+	shared_test_data_t *glob_data;
+	odp_cpumask_t unused;
+	uint32_t i;
+	uint8_t *address;
+
+	/* create a pool; the worker threads will look it up by name */
+	pool_params.pool_size = POOL_SZ;
+	pool_params.min_alloc = 0;
+	pool_params.max_alloc = POOL_SZ;
+	pool = odpdrv_shm_pool_create(POOL_NAME, &pool_params);
+	odpdrv_shm_pool_print("Stress test start", pool);
+
+	shm = odpdrv_shm_reserve(MEM_NAME, sizeof(shared_test_data_t),
+				 0, ODPDRV_SHM_LOCK);
+	CU_ASSERT(ODPDRV_SHM_INVALID != shm);
+	glob_data = odpdrv_shm_addr(shm);
+	CU_ASSERT_PTR_NOT_NULL(glob_data);
+
+	thrdarg.numthrds = odp_cpumask_default_worker(&unused, 0);
+	if (thrdarg.numthrds > MAX_WORKERS)
+		thrdarg.numthrds = MAX_WORKERS;
+
+	glob_data->nb_threads = thrdarg.numthrds;
+	odpdrv_barrier_init(&glob_data->test_barrier1, thrdarg.numthrds);
+	odp_spinlock_init(&glob_data->stress_lock);
+
+	/* before starting the threads, mark all entries as free: */
+	for (i = 0; i < STRESS_SIZE; i++)
+		glob_data->stress[i].state = STRESS_FREE;
+
+	/* create threads */
+	odp_cunit_thread_create(run_test_buddy_stress, &thrdarg);
+
+	/* wait for all threads to terminate: */
+	CU_ASSERT(odp_cunit_thread_exit(&thrdarg) >= 0);
+
+	odpdrv_shm_pool_print("Stress test: all threads finished", pool);
+
+	/* release leftovers: */
+	for (i = 0; i < STRESS_SIZE; i++) {
+		address = glob_data->stress[i].address;
+		if (glob_data->stress[i].state == STRESS_ALLOC)
+			odpdrv_shm_pool_free(pool, address);
+	}
+
+	CU_ASSERT(0 == odpdrv_shm_free_by_name(MEM_NAME));
+
+	/* check that no memory is left over: */
+	odpdrv_shm_pool_print("Stress test: all released", pool);
+
+	/* destroy pool: */
+	odpdrv_shm_pool_destroy(pool);
+}
+
 odp_testinfo_t drvshmem_suite[] = {
 	ODP_TEST_INFO(drvshmem_test_basic),
 	ODP_TEST_INFO(drvshmem_test_reserve_after_fork),
@@ -945,6 +1121,7 @@ odp_testinfo_t drvshmem_suite[] = {
 	ODP_TEST_INFO(drvshmem_test_stress),
 	ODP_TEST_INFO(drvshmem_test_buddy_basic),
 	ODP_TEST_INFO(drvshmem_test_slab_basic),
+	ODP_TEST_INFO(drvshmem_test_buddy_stress),
 	ODP_TEST_INFO_NULL,
 };

diff --git a/test/common_plat/validation/drv/drvshmem/drvshmem.h b/test/common_plat/validation/drv/drvshmem/drvshmem.h
index fdc1080..817b3d5 100644
--- a/test/common_plat/validation/drv/drvshmem/drvshmem.h
+++ b/test/common_plat/validation/drv/drvshmem/drvshmem.h
@@ -16,6 +16,7 @@ void drvshmem_test_singleva_after_fork(void);
 void drvshmem_test_stress(void);
 void drvshmem_test_buddy_basic(void);
 void drvshmem_test_slab_basic(void);
+void drvshmem_test_buddy_stress(void);
 
 /* test arrays: */
 extern odp_testinfo_t drvshmem_suite[];
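
Note for readers not familiar with the driver shm pool API exercised above: the
create/alloc/verify/free cycle that the stress threads drive concurrently boils
down to the single-threaded sketch below. It is illustrative only: it reuses the
odpdrv_shm_pool_* calls that appear in this patch, but the <odp_drv.h> include,
the pool name "example_pool" and the 1MB pool size are assumptions, and ODP
global/local initialisation is taken as already done.

#include <stdint.h>
#include <odp_drv.h>	/* assumed umbrella header for the odpdrv_* API */

/* Allocate one block from a named pool, write a byte pattern, verify it and
 * free it again: the core operation repeated randomly by the stress test.
 */
static int pool_roundtrip_sketch(void)
{
	odpdrv_shm_pool_param_t pool_params;
	odpdrv_shm_pool_t pool;
	uint8_t *address;
	uint32_t size = 256;	/* any size between min_alloc and max_alloc */
	uint32_t i;

	pool_params.pool_size = 1024 * 1024;	/* hypothetical 1MB pool */
	pool_params.min_alloc = 0;
	pool_params.max_alloc = pool_params.pool_size;

	pool = odpdrv_shm_pool_create("example_pool", &pool_params);

	address = odpdrv_shm_pool_alloc(pool, size);
	if (address == NULL)
		return -1;	/* pool exhausted */

	for (i = 0; i < size; i++)	/* write a pattern... */
		address[i] = i & 0xFF;
	for (i = 0; i < size; i++)	/* ...and check it back */
		if (address[i] != (i & 0xFF))
			return -1;

	odpdrv_shm_pool_free(pool, address);
	odpdrv_shm_pool_destroy(pool);
	return 0;
}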