Date: Tue, 2 Jul 2019 17:24:46 +0300
From: Ivan Khoronzhuk
To: Jesper Dangaard Brouer
Cc: grygorii.strashko@ti.com, davem@davemloft.net, ast@kernel.org,
	linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
	ilias.apalodimas@linaro.org, netdev@vger.kernel.org,
	daniel@iogearbox.net, jakub.kicinski@netronome.com,
	john.fastabend@gmail.com
Subject: Re: [PATCH v5 net-next 6/6] net: ethernet: ti: cpsw: add XDP support
Message-ID: <20190702142444.GC4510@khorivan>
References: <20190630172348.5692-1-ivan.khoronzhuk@linaro.org>
 <20190630172348.5692-7-ivan.khoronzhuk@linaro.org>
 <20190701181901.150c0b71@carbon>
 <20190702113738.GB4510@khorivan>
 <20190702153902.0e42b0b2@carbon>
In-Reply-To: <20190702153902.0e42b0b2@carbon>
User-Agent: Mutt/1.9.4 (2018-02-28)
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jul 02, 2019 at 03:39:02PM +0200, Jesper Dangaard Brouer wrote:
>On Tue, 2 Jul 2019 14:37:39 +0300
>Ivan Khoronzhuk wrote:
>
>> On Mon, Jul 01, 2019 at 06:19:01PM +0200, Jesper Dangaard Brouer wrote:
>> >On Sun, 30 Jun 2019 20:23:48 +0300
>> >Ivan Khoronzhuk wrote:
>> >
>> >> +static int cpsw_ndev_create_xdp_rxq(struct cpsw_priv *priv, int ch)
>> >> +{
>> >> +	struct cpsw_common *cpsw = priv->cpsw;
>> >> +	int ret, new_pool = false;
>> >> +	struct xdp_rxq_info *rxq;
>> >> +
>> >> +	rxq = &priv->xdp_rxq[ch];
>> >> +
>> >> +	ret = xdp_rxq_info_reg(rxq, priv->ndev, ch);
>> >> +	if (ret)
>> >> +		return ret;
>> >> +
>> >> +	if (!cpsw->page_pool[ch]) {
>> >> +		ret = cpsw_create_rx_pool(cpsw, ch);
>> >> +		if (ret)
>> >> +			goto err_rxq;
>> >> +
>> >> +		new_pool = true;
>> >> +	}
>> >> +
>> >> +	ret = xdp_rxq_info_reg_mem_model(rxq, MEM_TYPE_PAGE_POOL,
>> >> +					 cpsw->page_pool[ch]);
>> >> +	if (!ret)
>> >> +		return 0;
>> >> +
>> >> +	if (new_pool) {
>> >> +		page_pool_free(cpsw->page_pool[ch]);
>> >> +		cpsw->page_pool[ch] = NULL;
>> >> +	}
>> >> +
>> >> +err_rxq:
>> >> +	xdp_rxq_info_unreg(rxq);
>> >> +	return ret;
>> >> +}
>> >
>> >Looking at this, and Ilias'es XDP-netsec error handling path, it might
>> >be a mistake that I removed page_pool_destroy() and instead put the
>> >responsibility on xdp_rxq_info_unreg().
>>
>> As I see it, this starts not from page_pool_free() but from calling
>> unreg_mem_model() from xdp_rxq_info_unreg(). Once page_pool_free() is
>> hidden there, it looks more natural to let the whole chain be
>> self-destroying.
>>
>> >
>> >As here, we have to detect if page_pool_create() was a success, and then
>> >if xdp_rxq_info_reg_mem_model() was a failure, explicitly call
>> >page_pool_free() because the xdp_rxq_info_unreg() call cannot "free"
>> >the page_pool object given it was not registered.
>>
>> Yes, it looked a little ugly from the beginning, but, frankly, I have
>> gotten used to it already.
>>
>> >
>> >Ivan's patch in [1] might be a better approach, which forced all
>> >drivers to explicitly call page_pool_free(), even though it just
>> >dec-refcnt and the real call to page_pool_free() happened via
>> >xdp_rxq_info_unreg().
>> >
>> >To better handle the error path, I would re-introduce page_pool_destroy(),
>>
>> So, as I understand, you might do it later, and not for my special
>> case but because it makes the error path look a little prettier.
>> I'm perfectly fine with that, and better if you add it. For now my
>> implementation requires only the "xdp: allow same allocator usage"
>> patch, but if you insist I can also resend the patch in question after
>> my series is applied (with modifications to cpsw & netsec & mlx5 &
>> page_pool).
>>
>> What's your choice? I can add the patch needed for cpsw to your
>> series, to avoid some misuse.
>
>I will try to create a cleaned-up version of your patch [1] and
>re-introduce page_pool_destroy() for drivers to use, then we can build
>your driver on top of that.

I've corrected the patch to the xdp core and tested it. The "page pool
API" change now seems orthogonal, so nothing blocks sending v6, which is
actually done and no longer has a strict dependency on page pool API
changes, whenever those happen.

--
Regards,
Ivan Khoronzhuk