Subject: Re: [PATCH 1/5] mm/swapfile: add percpu_ref support for swap
To: "Huang, Ying"
References: <20210408130820.48233-1-linmiaohe@huawei.com>
 <20210408130820.48233-2-linmiaohe@huawei.com>
 <87fszww55d.fsf@yhuang6-desk1.ccr.corp.intel.com>
 <87zgy4ufr3.fsf@yhuang6-desk1.ccr.corp.intel.com>
 <46a51c49-2887-0c1a-bcf3-e1ebe9698ebf@huawei.com>
 <874kg9u0jo.fsf@yhuang6-desk1.ccr.corp.intel.com>
From: Miaohe Lin
Message-ID: <75e27441-7744-7a10-e709-c8cd00830099@huawei.com>
Date: Wed, 14 Apr 2021 09:58:33 +0800
In-Reply-To: <874kg9u0jo.fsf@yhuang6-desk1.ccr.corp.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2021/4/14 9:17, Huang, Ying wrote:
> Miaohe Lin writes:
>
>> On 2021/4/12 15:24, Huang, Ying wrote:
>>> "Huang, Ying" writes:
>>>
>>>> Miaohe Lin writes:
>>>>
>>>>> We will use percpu-refcount to serialize against concurrent swapoff. This
>>>>> patch adds the percpu_ref support for later fixup.
>>>>>
>>>>> Signed-off-by: Miaohe Lin
>>>>> ---
>>>>>  include/linux/swap.h |  2 ++
>>>>>  mm/swapfile.c        | 25 ++++++++++++++++++++++---
>>>>>  2 files changed, 24 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>>>>> index 144727041e78..849ba5265c11 100644
>>>>> --- a/include/linux/swap.h
>>>>> +++ b/include/linux/swap.h
>>>>> @@ -240,6 +240,7 @@ struct swap_cluster_list {
>>>>>   * The in-memory structure used to track swap areas.
>>>>>   */
>>>>>  struct swap_info_struct {
>>>>> +	struct percpu_ref users;	/* serialization against concurrent swapoff */
>>>>>  	unsigned long	flags;		/* SWP_USED etc: see above */
>>>>>  	signed short	prio;		/* swap priority of this type */
>>>>>  	struct plist_node list;		/* entry in swap_active_head */
>>>>> @@ -260,6 +261,7 @@ struct swap_info_struct {
>>>>>  	struct block_device *bdev;	/* swap device or bdev of swap file */
>>>>>  	struct file *swap_file;		/* seldom referenced */
>>>>>  	unsigned int old_block_size;	/* seldom referenced */
>>>>> +	struct completion comp;		/* seldom referenced */
>>>>>  #ifdef CONFIG_FRONTSWAP
>>>>>  	unsigned long *frontswap_map;	/* frontswap in-use, one bit per page */
>>>>>  	atomic_t frontswap_pages;	/* frontswap pages in-use counter */
>>>>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>>>>> index 149e77454e3c..724173cd7d0c 100644
>>>>> --- a/mm/swapfile.c
>>>>> +++ b/mm/swapfile.c
>>>>> @@ -39,6 +39,7 @@
>>>>>  #include
>>>>>  #include
>>>>>  #include
>>>>> +#include
>>>>>
>>>>>  #include
>>>>>  #include
>>>>> @@ -511,6 +512,15 @@ static void swap_discard_work(struct work_struct *work)
>>>>>  	spin_unlock(&si->lock);
>>>>>  }
>>>>>
>>>>> +static void swap_users_ref_free(struct percpu_ref *ref)
>>>>> +{
>>>>> +	struct swap_info_struct *si;
>>>>> +
>>>>> +	si = container_of(ref, struct swap_info_struct, users);
>>>>> +	complete(&si->comp);
>>>>> +	percpu_ref_exit(&si->users);
>>>>
>>>> Because percpu_ref_exit() is used, we cannot use percpu_ref_tryget() in
>>>> get_swap_device(), better to add comments there.
>>>
>>> I just noticed that the comments of percpu_ref_tryget_live() say,
>>>
>>>  * This function is safe to call as long as @ref is between init and exit.
>>>
>>> While we need to call get_swap_device() almost at any time, so it's
>>> better to avoid calling percpu_ref_exit() at all. This will waste some
>>> memory, but we need to follow the API definition to avoid potential
>>> issues in the long term.
>>
>> I have to admit that I'm not really familiar with percpu_ref. So I read the
>> implementation code of percpu_ref and found that percpu_ref_tryget_live()
>> could be called after exit now. But you're right that we need to follow the
>> API definition to avoid potential issues in the long term.
>>
>>>
>>> And we need to call percpu_ref_init() before inserting the swap_info_struct
>>> into the swap_info[].
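
Just to check my understanding of the constraint here: because si->users would
never be percpu_ref_exit()ed after swapon, get_swap_device()/put_swap_device()
could then look roughly like the sketch below (completely untested, error
reporting left out):

struct swap_info_struct *get_swap_device(swp_entry_t entry)
{
	struct swap_info_struct *si;

	if (!entry.val)
		return NULL;
	si = swp_swap_info(entry);
	if (!si)
		return NULL;
	/*
	 * Only succeeds while si->users is live, i.e. between bringing the
	 * ref back up in enable_swap_info() and percpu_ref_kill() in
	 * swapoff().  Calling this at any time is only safe because we
	 * never percpu_ref_exit() si->users.
	 */
	if (!percpu_ref_tryget_live(&si->users))
		return NULL;
	if (swp_offset(entry) >= si->max) {
		percpu_ref_put(&si->users);
		return NULL;
	}
	return si;
}

static inline void put_swap_device(struct swap_info_struct *si)
{
	percpu_ref_put(&si->users);
}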
>>
>> If we remove the call to percpu_ref_exit(), we should not use percpu_ref_init()
>> here because *percpu_ref->data is assumed to be NULL* in percpu_ref_init(),
>> while this is not the case as we do not call percpu_ref_exit(). Maybe
>> percpu_ref_reinit() or percpu_ref_resurrect() will do the work.
>>
>> One more thing, how could I distinguish the killed percpu_ref from a newly
>> allocated one? It seems percpu_ref_is_dying() is only safe to call when @ref
>> is between init and exit. Maybe I could do this in alloc_swap_info()?
>
> Yes. In alloc_swap_info(), you can distinguish newly allocated and
> reused swap_info_struct.
>
>>>
>>>>> +}
>>>>> +
>>>>>  static void alloc_cluster(struct swap_info_struct *si, unsigned long idx)
>>>>>  {
>>>>>  	struct swap_cluster_info *ci = si->cluster_info;
>>>>> @@ -2500,7 +2510,7 @@ static void enable_swap_info(struct swap_info_struct *p, int prio,
>>>>>  	 * Guarantee swap_map, cluster_info, etc. fields are valid
>>>>>  	 * between get/put_swap_device() if SWP_VALID bit is set
>>>>>  	 */
>>>>> -	synchronize_rcu();
>>>>> +	percpu_ref_reinit(&p->users);
>>>>
>>>> Although the effect is the same, I think it's better to use
>>>> percpu_ref_resurrect() here to improve code readability.
>>>
>>> Check the original commit description for commit eb085574a752 ("mm, swap:
>>> fix race between swapoff and some swap operations") and the discussion email
>>> thread below again,
>>>
>>> https://lore.kernel.org/linux-mm/20171219053650.GB7829@linux.vnet.ibm.com/
>>>
>>> I found that the synchronize_rcu() here is to avoid calling smp_rmb() or
>>> smp_load_acquire() in get_swap_device(). Now we will use
>>> percpu_ref_tryget_live() in get_swap_device(), so we will need to add
>>> the necessary memory barrier, or make sure percpu_ref_tryget_live() has
>>> ACQUIRE semantics. Per my understanding, we need to change
>>> percpu_ref_tryget_live() for that.
>>>
>>
>> Do you mean the scenario below is possible?
>>
>> cpu1
>> swapon()
>>   ...
>>   percpu_ref_init
>>   ...
>>   setup_swap_info
>>   /* smp_store_release() is inside percpu_ref_reinit */
>>   percpu_ref_reinit
>
> spin_unlock() has RELEASE semantics already.
>
>>   ...
>>
>> cpu2
>> get_swap_device()
>>   /* ignored smp_rmb() */
>>   percpu_ref_tryget_live
>
> Some kind of ACQUIRE is required here to guarantee the refcount is
> checked before fetching the other fields of swap_info_struct. I have
> sent out an RFC patch to the mailing list to discuss this.

Many thanks. But we may still need to add an smp_rmb() in get_swap_device()
in case we can't add ACQUIRE semantics to the refcount operation.

>
>>   ...
>>
>> There is indeed a missing smp_rmb() in percpu_ref_tryget_live(). So I think
>> the above scenario is possible and we should fix this.
>>
>>>>>  	spin_lock(&swap_lock);
>>>>>  	spin_lock(&p->lock);
>>>>>  	_enable_swap_info(p);
>>>>> @@ -2621,11 +2631,13 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
>>>>>  	p->flags &= ~SWP_VALID;		/* mark swap device as invalid */
>>>>>  	spin_unlock(&p->lock);
>>>>>  	spin_unlock(&swap_lock);
>>>>> +
>>>>> +	percpu_ref_kill(&p->users);
>>>>>  	/*
>>>>>  	 * wait for swap operations protected by get/put_swap_device()
>>>>>  	 * to complete
>>>>>  	 */
>>>>> -	synchronize_rcu();
>>>>> +	wait_for_completion(&p->comp);
>>>>
>>>> Better to move percpu_ref_kill() after the comments. And maybe revise
>>>> the comments.
>>>
>>> After reading the original commit description as above, I found that we need
>>> synchronize_rcu() here to protect the access to the swap cache data structure.
>>> Because there's call_rcu() during percpu_ref_kill(), it appears OK to keep
>>> the synchronize_rcu() here. And we need to revise the comments to make it
>>> clear what is protected by which operation.
>>>
>>
>> Per my understanding, percpu_ref->data->release is called only after the
>> refcnt reaches 0, and that can only happen once a full grace period has
>> elapsed; before that the refcnt won't drop to 0. wait_for_completion() is
>> used to wait for the last reference to be released. So synchronize_rcu()
>> is not necessary here?
>
> Then we would depend on the implementation of percpu_ref. If it changed
> its implementation, it may take a long time to find out that we need to
> change the code here. I guess that in most cases, even with a
> synchronize_rcu() added here, we still only need to wait for one grace
> period. So the overhead of calling synchronize_rcu() is low here. And the
> code is easier to maintain.
>

Sounds reasonable. Will do. Thanks.

> Best Regards,
> Huang, Ying
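
For the next version I currently plan to shape the swapoff side roughly like
the below (untested sketch, comment wording still to be polished), keeping the
synchronize_rcu() as you suggested:

	...
	percpu_ref_kill(&p->users);
	/*
	 * We need synchronize_rcu() here to protect the access to
	 * the swap cache data structure.
	 */
	synchronize_rcu();
	/*
	 * Wait for swap operations protected by get/put_swap_device()
	 * to complete; the final reference put ends up in
	 * swap_users_ref_free(), which completes p->comp.
	 */
	wait_for_completion(&p->comp);
	...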