From patchwork Sat Oct 24 16:11:07 2020
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 318945
From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
To: dev@dpdk.org, honnappa.nagarahalli@arm.com,
	konstantin.ananyev@intel.com, stephen@networkplumber.org
Cc: dharmik.thakkar@arm.com, ruifeng.wang@arm.com, olivier.matz@6wind.com,
	david.marchand@redhat.com, nd@arm.com
Date: Sat, 24 Oct 2020 11:11:07 -0500
Message-Id: <20201024161112.13730-4-honnappa.nagarahalli@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201024161112.13730-1-honnappa.nagarahalli@arm.com>
References: <20200224203931.21256-1-honnappa.nagarahalli@arm.com>
	<20201024161112.13730-1-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v4 3/8] test/ring: add functional tests for zero
	copy APIs
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

Add functional tests for zero copy APIs. Test enqueue/dequeue functions
are created using the zero copy APIs to fit into the existing testing
method.

Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 app/test/test_ring.c | 196 +++++++++++++++++++++++++++++++++++++++++++
 app/test/test_ring.h |  42 ++++++++++
 2 files changed, 238 insertions(+)

-- 
2.17.1

diff --git a/app/test/test_ring.c b/app/test/test_ring.c
index 329d538a9..99fe4b46f 100644
--- a/app/test/test_ring.c
+++ b/app/test/test_ring.c
@@ -1,5 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(c) 2010-2014 Intel Corporation
+ * Copyright(c) 2020 Arm Limited
  */

 #include
@@ -68,6 +69,149 @@ static const int esize[] = {-1, 4, 8, 16, 20};
+/* Wrappers around the zero-copy APIs. The wrappers match
+ * the normal enqueue/dequeue API declarations.
+ */
+static unsigned int
+test_ring_enqueue_zc_bulk(struct rte_ring *r, void * const *obj_table,
+	unsigned int n, unsigned int *free_space)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_enqueue_zc_bulk_start(r, n, &zcd, free_space);
+	if (ret > 0) {
+		/* Copy the data to the ring */
+		test_ring_copy_to(&zcd, obj_table, sizeof(void *), ret);
+		rte_ring_enqueue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_enqueue_zc_bulk_elem(struct rte_ring *r, const void *obj_table,
+	unsigned int esize, unsigned int n, unsigned int *free_space)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_enqueue_zc_bulk_elem_start(r, esize, n,
+				&zcd, free_space);
+	if (ret > 0) {
+		/* Copy the data to the ring */
+		test_ring_copy_to(&zcd, obj_table, esize, ret);
+		rte_ring_enqueue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_enqueue_zc_burst(struct rte_ring *r, void * const *obj_table,
+	unsigned int n, unsigned int *free_space)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_enqueue_zc_burst_start(r, n, &zcd, free_space);
+	if (ret > 0) {
+		/* Copy the data to the ring */
+		test_ring_copy_to(&zcd, obj_table, sizeof(void *), ret);
+		rte_ring_enqueue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_enqueue_zc_burst_elem(struct rte_ring *r, const void *obj_table,
+	unsigned int esize, unsigned int n, unsigned int *free_space)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_enqueue_zc_burst_elem_start(r, esize, n,
+				&zcd, free_space);
+	if (ret > 0) {
+		/* Copy the data to the ring */
+		test_ring_copy_to(&zcd, obj_table, esize, ret);
+		rte_ring_enqueue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_dequeue_zc_bulk(struct rte_ring *r, void **obj_table,
+	unsigned int n, unsigned int *available)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_dequeue_zc_bulk_start(r, n, &zcd, available);
+	if (ret > 0) {
+		/* Copy the data from the ring */
+		test_ring_copy_from(&zcd, obj_table, sizeof(void *), ret);
+		rte_ring_dequeue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_dequeue_zc_bulk_elem(struct rte_ring *r, void *obj_table,
+	unsigned int esize, unsigned int n, unsigned int *available)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_dequeue_zc_bulk_elem_start(r, esize, n,
+				&zcd, available);
+	if (ret > 0) {
+		/* Copy the data from the ring */
+		test_ring_copy_from(&zcd, obj_table, esize, ret);
+		rte_ring_dequeue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_dequeue_zc_burst(struct rte_ring *r, void **obj_table,
+	unsigned int n, unsigned int *available)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_dequeue_zc_burst_start(r, n, &zcd, available);
+	if (ret > 0) {
+		/* Copy the data from the ring */
+		test_ring_copy_from(&zcd, obj_table, sizeof(void *), ret);
+		rte_ring_dequeue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
+static unsigned int
+test_ring_dequeue_zc_burst_elem(struct rte_ring *r, void *obj_table,
+	unsigned int esize, unsigned int n, unsigned int *available)
+{
+	unsigned int ret;
+	struct rte_ring_zc_data zcd;
+
+	ret = rte_ring_dequeue_zc_burst_elem_start(r, esize, n,
+				&zcd, available);
+	if (ret > 0) {
+		/* Copy the data from the ring */
+		test_ring_copy_from(&zcd, obj_table, esize, ret);
+		rte_ring_dequeue_zc_finish(r, ret);
+	}
+
+	return ret;
+}
+
 static const struct {
 	const char *desc;
 	uint32_t api_type;
@@ -219,6 +363,58 @@ static const struct {
 			.felem = rte_ring_dequeue_burst_elem,
 		},
 	},
+	{
+		.desc = "SP/SC sync mode (ZC)",
+		.api_type = TEST_RING_ELEM_BULK | TEST_RING_THREAD_SPSC,
+		.create_flags = RING_F_SP_ENQ | RING_F_SC_DEQ,
+		.enq = {
+			.flegacy = test_ring_enqueue_zc_bulk,
+			.felem = test_ring_enqueue_zc_bulk_elem,
+		},
+		.deq = {
+			.flegacy = test_ring_dequeue_zc_bulk,
+			.felem = test_ring_dequeue_zc_bulk_elem,
+		},
+	},
+	{
+		.desc = "MP_HTS/MC_HTS sync mode (ZC)",
+		.api_type = TEST_RING_ELEM_BULK | TEST_RING_THREAD_DEF,
+		.create_flags = RING_F_MP_HTS_ENQ | RING_F_MC_HTS_DEQ,
+		.enq = {
+			.flegacy = test_ring_enqueue_zc_bulk,
+			.felem = test_ring_enqueue_zc_bulk_elem,
+		},
+		.deq = {
+			.flegacy = test_ring_dequeue_zc_bulk,
+			.felem = test_ring_dequeue_zc_bulk_elem,
+		},
+	},
+	{
+		.desc = "SP/SC sync mode (ZC)",
+		.api_type = TEST_RING_ELEM_BURST | TEST_RING_THREAD_SPSC,
+		.create_flags = RING_F_SP_ENQ | RING_F_SC_DEQ,
+		.enq = {
+			.flegacy = test_ring_enqueue_zc_burst,
+			.felem = test_ring_enqueue_zc_burst_elem,
+		},
+		.deq = {
+			.flegacy = test_ring_dequeue_zc_burst,
+			.felem = test_ring_dequeue_zc_burst_elem,
+		},
+	},
+	{
+		.desc = "MP_HTS/MC_HTS sync mode (ZC)",
+		.api_type = TEST_RING_ELEM_BURST | TEST_RING_THREAD_DEF,
+		.create_flags = RING_F_MP_HTS_ENQ | RING_F_MC_HTS_DEQ,
+		.enq = {
+			.flegacy = test_ring_enqueue_zc_burst,
+			.felem = test_ring_enqueue_zc_burst_elem,
+		},
+		.deq = {
+			.flegacy = test_ring_dequeue_zc_burst,
+			.felem = test_ring_dequeue_zc_burst_elem,
+		},
+	}
 };

 static unsigned int
diff --git a/app/test/test_ring.h b/app/test/test_ring.h
index b44711398..b525abb79 100644
--- a/app/test/test_ring.h
+++ b/app/test/test_ring.h
@@ -55,6 +55,48 @@ test_ring_inc_ptr(void *obj, int esize, unsigned int n)
 	return (void *)((uint32_t *)obj + (n * sz / sizeof(uint32_t)));
 }

+static inline void
+test_ring_mem_copy(void *dst, void * const *src, int esize, unsigned int num)
+{
+	size_t sz;
+
+	sz = num * sizeof(void *);
+	if (esize != -1)
+		sz = esize * num;
+
+	memcpy(dst, src, sz);
+}
+
+/* Copy to the ring memory */
+static inline void
+test_ring_copy_to(struct rte_ring_zc_data *zcd, void * const *src, int esize,
+	unsigned int num)
+{
+	test_ring_mem_copy(zcd->ptr1, src, esize, zcd->n1);
+	if (zcd->n1 != num) {
+		if (esize == -1)
+			src = src + zcd->n1;
+		else
+			src = (void * const *)((const uint32_t *)src +
+					(zcd->n1 * esize / sizeof(uint32_t)));
+		test_ring_mem_copy(zcd->ptr2, src,
+					esize, num - zcd->n1);
+	}
+}
+
+/* Copy from the ring memory */
+static inline void
+test_ring_copy_from(struct rte_ring_zc_data *zcd, void *dst, int esize,
+	unsigned int num)
+{
+	test_ring_mem_copy(dst, zcd->ptr1, esize, zcd->n1);
+
+	if (zcd->n1 != num) {
+		dst = test_ring_inc_ptr(dst, esize, zcd->n1);
+		test_ring_mem_copy(dst, zcd->ptr2, esize, num - zcd->n1);
+	}
+}
+
 static __rte_always_inline unsigned int
 test_ring_enqueue(struct rte_ring *r, void **obj, int esize,
 		unsigned int n, unsigned int api_type)