From patchwork Tue Oct 28 19:43:54 2014
X-Patchwork-Submitter: Maxim Uvarov <maxim.uvarov@linaro.org>
X-Patchwork-Id: 39698
From: Maxim Uvarov <maxim.uvarov@linaro.org>
To: lng-odp@lists.linaro.org
Date: Tue, 28 Oct 2014 22:43:54 +0300
Message-Id: <1414525434-10402-1-git-send-email-maxim.uvarov@linaro.org>
X-Mailer: git-send-email 1.8.5.1.163.gd7aced9
Subject: [lng-odp] [ARCH PATCHv2] ipc design and usage modes

Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
---
v2: fixed according to Mike's comments.

 ipc.dox | 228 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 228 insertions(+)
 create mode 100644 ipc.dox

diff --git a/ipc.dox b/ipc.dox
new file mode 100644
index 0000000..fd8e71d
--- /dev/null
+++ b/ipc.dox
@@ -0,0 +1,228 @@
+/* Copyright (c) 2014, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+/**
+@page ipc_design Inter Process Communication (IPC) API
+
+@tableofcontents
+
+@section ipc_intro Introduction
+ This document describes the two ODP application modes, multithreading and
+ multiprocessing, with respect to their impact on IPC.
+
+@subsection odp_modes Application Thread/Process modes
+ ODP applications can use the following programming models for multi-core
+ support:
+ -# Single application with ODP worker threads.
+ -# Multi-process application with a single packet I/O pool and common
+    initialization.
+ -# Separate processes communicating through the IPC API.
+
+@todo - add diagram about IPC modes.
+
+@subsubsection odp_mode_threads Thread mode
+ The initialization sequence for thread mode is as follows:
+
+@verbatim
+ main() {
+	/* Init ODP before calling anything else. */
+	odp_init_global(NULL, NULL);
+
+	/* Init this thread. */
+	odp_init_local();
+
+	/* Allocate memory for the packet pool. This memory will be visible
+	 * to all threads. */
+	shm = odp_shm_reserve("shm_packet_pool",
+			      SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
+	pool_base = odp_shm_addr(shm);
+
+	/* Create a pool instance in the reserved shm. */
+	pool = odp_buffer_pool_create("packet_pool", pool_base,
+				      SHM_PKT_POOL_SIZE,
+				      SHM_PKT_POOL_BUF_SIZE,
+				      ODP_CACHE_LINE_SIZE,
+				      ODP_BUFFER_TYPE_PACKET);
+
+	/* Create worker threads. */
+	odph_linux_pthread_create(&thread_tbl[i], 1, core, thr_run_func,
+				  &args);
+ }
+
+ /* thread function */
+ thr_run_func () {
+	/* Look up the packet pool. */
+	pkt_pool = odp_buffer_pool_lookup("packet_pool");
+
+	/* Open a packet I/O instance for this thread. */
+	pktio = odp_pktio_open("eth0", pkt_pool);
+
+	for (;;) {
+		/* Read a buffer. */
+		buf = odp_schedule(NULL, ODP_SCHED_WAIT);
+		... do something ...
+	}
+ }
+@endverbatim
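+
+ The "... do something ..." step is application specific. As a minimal,
+ purely illustrative sketch (not part of the IPC design itself), a worker
+ could simply forward each scheduled buffer to the default output queue of
+ its pktio, dropping the buffer with odp_buffer_free() if the enqueue fails:
+
+@verbatim
+ /* Sketch only: forward every scheduled buffer back out through this
+  * thread's pktio. Error handling beyond the drop path is omitted. */
+ outq = odp_pktio_outq_getdef(pktio);
+
+ for (;;) {
+	buf = odp_schedule(NULL, ODP_SCHED_WAIT);
+
+	if (odp_queue_enq(outq, buf) < 0)
+		odp_buffer_free(buf);
+ }
+@endverbatim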
+
+@subsubsection odp_mode_processes Processes mode with shared memory
+ The initialization sequence in process mode with shared memory is as
+ follows:
+
+@verbatim
+ main() {
+	/* Init ODP before calling anything else. In process mode
+	 * odp_init_global() is called only once, in the main process. */
+	odp_init_global(NULL, NULL);
+
+	/* Init this thread. */
+	odp_init_local();
+
+	/* Allocate memory for the packet pool. This memory will be visible
+	 * to all processes. */
+	shm = odp_shm_reserve("shm_packet_pool",
+			      SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
+	pool_base = odp_shm_addr(shm);
+
+	/* Create a pool instance in the reserved shm. */
+	pool = odp_buffer_pool_create("packet_pool", pool_base,
+				      SHM_PKT_POOL_SIZE,
+				      SHM_PKT_POOL_BUF_SIZE,
+				      ODP_CACHE_LINE_SIZE,
+				      ODP_BUFFER_TYPE_PACKET);
+
+	/* Call odph_linux_process_fork_n(), which fork()s the current
+	 * process into several worker processes. */
+	odph_linux_process_fork_n(proc, num_workers, first_core);
+
+	/* Run the same function as the thread mode uses. */
+	thr_run_func();
+ }
+
+ /* thread function */
+ thr_run_func () {
+	/* Look up the packet pool. */
+	pkt_pool = odp_buffer_pool_lookup("packet_pool");
+
+	/* Open a packet I/O instance for this process. */
+	pktio = odp_pktio_open("eth0", pkt_pool);
+
+	for (;;) {
+		/* Read a buffer. */
+		buf = odp_schedule(NULL, ODP_SCHED_WAIT);
+		... do something ...
+	}
+ }
+@endverbatim
+
+@subsubsection odp_mode_sep_processes Separate Processes mode
+ This mode differs from the mode with common shared memory. Each execution
+ unit is a completely independent process which calls odp_init_global() and
+ performs the rest of the initialization on its own, then opens an IPC pktio
+ interface and uses it to exchange packets with the other processes. For the
+ base implementation (linux-generic), shared memory is used as the IPC
+ mechanism, making it easy to reuse for different use cases: processes
+ spread amongst different VMs, bare metal or regular Linux user space - in
+ fact any processes that can share memory.
+
+ In hardware implementations the IPC pktio can be offloaded to the SoC
+ packet functions. The initialization sequence in separate processes mode is
+ the same as in process mode with shared memory, with the following
+ differences:
+
+@subsubsection odp_mode_sep_processes_cons Separate Processes Sender (linux-generic)
+ -# Each process calls odp_init_global(), creates its pools, etc.
+
+ -# The ODP_SHM_PROC flag is passed so that the memory can be mapped from a
+    different process:
+
+@verbatim
+ shm = odp_shm_reserve("shm_packet_pool",
+		       SHM_PKT_POOL_SIZE,
+		       ODP_CACHE_LINE_SIZE,
+		       ODP_SHM_PROC);
+
+ pool_base = odp_shm_addr(shm);
+@endverbatim
+
+ -# The worker thread (or process) creates the IPC pktio and sends buffers
+    to it, either:
+
+ A) directly via packet I/O:
+
+@verbatim
+ odp_pktio_t ipc_pktio = odp_pktio_open("ipc_pktio", 0);
+ odp_pktio_send(ipc_pktio, pkt_tbl, pkts);
+@endverbatim
+
+ B) or through the default output queue instead of the packet I/O call:
+
+@verbatim
+ odp_queue_t ipcq = odp_pktio_outq_getdef(ipc_pktio);
+ /* Enqueue the packet to the output queue. */
+ odp_queue_enq(ipcq, buf);
+@endverbatim
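+
+ For illustration only, a sender worker in this mode might bridge packets
+ from a physical interface into the IPC pktio; phys_pktio and the burst
+ handling below are assumptions of this sketch, not part of the design:
+
+@verbatim
+ /* Sketch: receive a burst from a physical pktio and push it into the IPC
+  * pktio opened above. Error handling is omitted. */
+ for (;;) {
+	pkts = odp_pktio_recv(phys_pktio, pkt_tbl, MAX_PKT_BURST);
+	if (pkts > 0)
+		odp_pktio_send(ipc_pktio, pkt_tbl, pkts);
+ }
+@endverbatim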
+
+@subsubsection odp_mode_sep_processes_recv Separate Processes Receiver (linux-generic)
+ On the other end, the process also creates an IPC packet I/O instance and
+ receives packets from it.
+
+@verbatim
+ /* Create a packet pool visible only to this second process. Packets from
+  * the IPC shared memory will be copied into it. */
+ shm = odp_shm_reserve("local_packet_pool",
+		       SHM_PKT_POOL_SIZE, ODP_CACHE_LINE_SIZE, 0);
+
+ pool_base = odp_shm_addr(shm);
+ pool = odp_buffer_pool_create("ipc_packet_pool", pool_base,
+			       SHM_PKT_POOL_SIZE,
+			       SHM_PKT_POOL_BUF_SIZE,
+			       ODP_CACHE_LINE_SIZE,
+			       ODP_BUFFER_TYPE_PACKET);
+
+ pool_base = NULL;
+ /* Loop until the remote shared pool is found. The ODP_SHM_PROC_NOCREAT
+  * flag tells odp_shm_reserve() not to create the shared memory object,
+  * but only to look it up. */
+ while (1) {
+	shm = odp_shm_reserve("shm_packet_pool",
+			      SHM_PKT_POOL_SIZE,
+			      ODP_CACHE_LINE_SIZE,
+			      ODP_SHM_PROC_NOCREAT);
+	pool_base = odp_shm_addr(shm);
+	if (pool_base != NULL) {
+		break;
+	} else {
+		ODP_DBG("looking up shm_packet_pool\n");
+		sleep(1);
+	}
+ }
+
+ /* Look up the packet I/O in the IPC shared memory and link it to the
+  * local pool. */
+ while (1) {
+	pktio = odp_pktio_lookup("ipc_pktio", pool, pool_base);
+	if (pktio == ODP_PKTIO_INVALID) {
+		sleep(1);
+		printf("pid %d: looking for ipc_pktio\n", getpid());
+		continue;
+	}
+	break;
+ }
+
+ /* Get packets from the IPC. */
+ for (;;) {
+	pkts = odp_pktio_recv(pktio, pkt_tbl, MAX_PKT_BURST);
+	...
+ }
+@endverbatim
+
+@subsubsection odp_mode_sep_processes_hw Separate Processes Hardware optimized
+ A hardware SoC implementation of the IPC exchange can differ: it can use a
+ shared pool, or it can rely on the hardware for packet transmission. But
+ the API interface remains the same:
+
+ odp_pktio_open(), odp_pktio_lookup()
+
+@todo - Bug 825: the odp_buffer_pool_create() API will change to allocate
+ memory for the pool internally. Then the odp_shm_reserve() call for the
+ remote pool memory and odp_pktio_lookup() can move inside
+ odp_buffer_pool_create().
+
+*/