From: Qiujun Huang
Date: Thu, 26 Mar 2020 14:13:26 +0800
Subject: Re: [PATCH v4] sctp: fix refcount bug in sctp_wfree
References: <20200322090425.6253-1-hqjagain@gmail.com>
 <20200326001416.GH3756@localhost.localdomain>
 <20200326032252.GI3756@localhost.localdomain>
In-Reply-To: <20200326032252.GI3756@localhost.localdomain>
To: Marcelo Ricardo Leitner
Miller" , vyasevich@gmail.com, nhorman@tuxdriver.com, Jakub Kicinski , linux-sctp@vger.kernel.org, netdev , LKML , anenbupt@gmail.com Content-Type: text/plain; charset="UTF-8" Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Thu, Mar 26, 2020 at 11:22 AM Marcelo Ricardo Leitner wrote: > > On Thu, Mar 26, 2020 at 09:30:08AM +0800, Qiujun Huang wrote: > > On Thu, Mar 26, 2020 at 8:14 AM Marcelo Ricardo Leitner > > wrote: > > > > > > On Sun, Mar 22, 2020 at 05:04:25PM +0800, Qiujun Huang wrote: > > > > sctp_sock_migrate should iterate over the datamsgs to modify > > > > all trunks(skbs) to newsk. For this, out_msg_list is added to > > > > > > s/trunks/chunks/ > > > > My :p. > > > > > > > > > sctp_outq to maintain datamsgs list. > > > > > > It is an interesting approach. It speeds up the migration, yes, but it > > > will also use more memory per datamsg, for an operation that, when > > > performed, the socket is usually calm. > > > > > > It's also another list to be handled, and I'm not seeing the patch > > > here move the datamsg itself now to the new outq. It would need > > > something along these lines: > > > > Are all the rx chunks in the rx queues? > > Yes, even with GSO. > > > > > > sctp_sock_migrate() > > > { > > > ... > > > /* Move any messages in the old socket's receive queue that are for the > > > * peeled off association to the new socket's receive queue. > > > */ > > > sctp_skb_for_each(skb, &oldsk->sk_receive_queue, tmp) { > > > event = sctp_skb2event(skb); > > > ... > > > /* Walk through the pd_lobby, looking for skbs that > > > * need moved to the new socket. > > > */ > > > sctp_skb_for_each(skb, &oldsp->pd_lobby, tmp) { > > > event = sctp_skb2event(skb); > > > > > > That said, I don't think it's worth this new list. > > > > About this case: > > datamsg > > ->chunk0 chunk1 > > chunk2 > > queue ->transmitted ->retransmit > > ->not in any queue > > We always can find it through the other chunks, otherwise it's freed. > > > > > Also need to maintain a datamsg list to record which datamsg is > > processed avoiding repetitive > > processing. > > Right, but for that we can add a simple check on > sctp_for_each_tx_datamsg() based on a parameter. Great! I get it, thanks! > > > So, list it to outq. Maybe it will be used sometime. > > We can change it when the time comes. For now, if we can avoid growing > sctp_datamsg, it's better. With this patch, it grows from 40 to 56 > bytes, leaving just 8 left before it starts using a slab of 128 bytes > for it. > > > The patched list_for_each_entry() can/should be factored out into > __sctp_for_each_tx_datachunk, whose first parameter then is the queue > instead the asoc. 
> > So, list it to outq. Maybe it will be used sometime.
>
> We can change it when the time comes. For now, if we can avoid growing
> sctp_datamsg, it's better. With this patch, it grows from 40 to 56
> bytes, leaving just 8 left before it starts using a slab of 128 bytes
> for it.
>
> The patched list_for_each_entry() can/should be factored out into
> __sctp_for_each_tx_datachunk, whose first parameter then is the queue
> instead the asoc.
>
> ---8<---
>
> diff --git a/net/sctp/socket.c b/net/sctp/socket.c
> index fed26a1e9518..62f401799709 100644
> --- a/net/sctp/socket.c
> +++ b/net/sctp/socket.c
> @@ -148,19 +148,30 @@ static void sctp_clear_owner_w(struct sctp_chunk *chunk)
>  }
>
>  static void sctp_for_each_tx_datachunk(struct sctp_association *asoc,
> +                                       bool clear,
>                                         void (*cb)(struct sctp_chunk *))
>
>  {
> +       struct sctp_datamsg *msg, *prev_msg = NULL;
>         struct sctp_outq *q = &asoc->outqueue;
> +       struct sctp_chunk *chunk, *c;
>         struct sctp_transport *t;
> -       struct sctp_chunk *chunk;
>
>         list_for_each_entry(t, &asoc->peer.transport_addr_list, transports)
>                 list_for_each_entry(chunk, &t->transmitted, transmitted_list)
>                         cb(chunk);
>
> -       list_for_each_entry(chunk, &q->retransmit, transmitted_list)
> -               cb(chunk);
> +       list_for_each_entry(chunk, &q->sacked, transmitted_list) {
> +               msg = chunk->msg;
> +               if (msg == prev_msg)
> +                       continue;
> +               list_for_each_entry(c, &msg->chunks, frag_list) {
> +                       if ((clear && asoc->base.sk == c->skb->sk) ||
> +                           (!clear && asoc->base.sk != c->skb->sk))
> +                               cb(c);
> +               }
> +               prev_msg = msg;
> +       }
>
>         list_for_each_entry(chunk, &q->sacked, transmitted_list)
>                 cb(chunk);
> @@ -9574,9 +9585,9 @@ static int sctp_sock_migrate(struct sock *oldsk, struct sock *newsk,
>          * paths won't try to lock it and then oldsk.
>          */
>         lock_sock_nested(newsk, SINGLE_DEPTH_NESTING);
> -       sctp_for_each_tx_datachunk(assoc, sctp_clear_owner_w);
> +       sctp_for_each_tx_datachunk(assoc, true, sctp_clear_owner_w);
>         sctp_assoc_migrate(assoc, newsk);
> -       sctp_for_each_tx_datachunk(assoc, sctp_set_owner_w);
> +       sctp_for_each_tx_datachunk(assoc, false, sctp_set_owner_w);
>
>         /* If the association on the newsk is already closed before accept()
>          * is called, set RCV_SHUTDOWN flag.
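If I understand the factoring you mean, the helper for v5 would be
roughly like this (untested sketch; passing the sk explicitly is my
guess, since the walk still needs asoc->base.sk for the comparison):

static void __sctp_for_each_tx_datachunk(struct list_head *queue,
                                         struct sock *sk, bool clear,
                                         void (*cb)(struct sctp_chunk *))
{
        struct sctp_datamsg *msg, *prev_msg = NULL;
        struct sctp_chunk *chunk, *c;

        list_for_each_entry(chunk, queue, transmitted_list) {
                msg = chunk->msg;
                /* One datamsg's fragments are adjacent in the queue. */
                if (msg == prev_msg)
                        continue;
                list_for_each_entry(c, &msg->chunks, frag_list) {
                        if ((clear && sk == c->skb->sk) ||
                            (!clear && sk != c->skb->sk))
                                cb(c);
                }
                prev_msg = msg;
        }
}

Then sctp_for_each_tx_datachunk() would just invoke it once per queue,
e.g. for each of &t->transmitted, &q->retransmit and &q->sacked.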