Subject: Re: [PATCH v2 1/5] mm/swapfile: add percpu_ref support for swap
From: Miaohe Lin <linmiaohe@huawei.com>
To: "Huang, Ying"
Date: Mon, 19 Apr 2021 16:20:15 +0800
Message-ID: <84daf06a-84b4-0523-b278-123e546a92e2@huawei.com>
References: <20210417094039.51711-1-linmiaohe@huawei.com>
 <20210417094039.51711-2-linmiaohe@huawei.com>
 <87eef7kmzw.fsf@yhuang6-desk1.ccr.corp.intel.com>
 <753f414f-34a1-b16a-f826-7deb2dcd4af6@huawei.com>
 <87czuq4uo2.fsf@yhuang6-desk1.ccr.corp.intel.com>
 <87zgxu3e4v.fsf@yhuang6-desk1.ccr.corp.intel.com>
In-Reply-To: <87zgxu3e4v.fsf@yhuang6-desk1.ccr.corp.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2021/4/19 15:52, Huang, Ying wrote:
> Miaohe Lin writes:
>
>> On 2021/4/19 15:09, Huang, Ying wrote:
>>> Miaohe Lin writes:
>>>
>>>> On 2021/4/19 10:48, Huang, Ying wrote:
>>>>> Miaohe Lin writes:
>>>>>
>>>>>> We will use percpu-refcount to serialize against concurrent swapoff. This
>>>>>> patch adds the percpu_ref support for swap.
>>>>>>
>>>>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>>>>> ---
>>>>>>  include/linux/swap.h |  3 +++
>>>>>>  mm/swapfile.c        | 33 +++++++++++++++++++++++++++++----
>>>>>>  2 files changed, 32 insertions(+), 4 deletions(-)
>>>>>>
>>>>>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>>>>>> index 144727041e78..8be36eb58b7a 100644
>>>>>> --- a/include/linux/swap.h
>>>>>> +++ b/include/linux/swap.h
>>>>>> @@ -240,6 +240,7 @@ struct swap_cluster_list {
>>>>>>   * The in-memory structure used to track swap areas.
>>>>>>   */
>>>>>>  struct swap_info_struct {
>>>>>> +	struct percpu_ref users;	/* serialization against concurrent swapoff */
>>>>>
>>>>> The comments aren't general enough. We use this to check whether the
>>>>> swap device has been fully initialized, etc. May be something as below?
>>>>>
>>>>> /* indicate and keep swap device valid */
>>>>
>>>> Looks good.
>>>>
>>>>>
>>>>>>  	unsigned long flags;		/* SWP_USED etc: see above */
>>>>>>  	signed short prio;		/* swap priority of this type */
>>>>>>  	struct plist_node list;		/* entry in swap_active_head */
>>>>>> @@ -260,6 +261,8 @@ struct swap_info_struct {
>>>>>>  	struct block_device *bdev;	/* swap device or bdev of swap file */
>>>>>>  	struct file *swap_file;		/* seldom referenced */
>>>>>>  	unsigned int old_block_size;	/* seldom referenced */
>>>>>> +	bool ref_initialized;		/* seldom referenced */
>>>>>> +	struct completion comp;		/* seldom referenced */
>>>>>>  #ifdef CONFIG_FRONTSWAP
>>>>>>  	unsigned long *frontswap_map;	/* frontswap in-use, one bit per page */
>>>>>>  	atomic_t frontswap_pages;	/* frontswap pages in-use counter */
>>>>>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>>>>>> index 149e77454e3c..66515a3a2824 100644
>>>>>> --- a/mm/swapfile.c
>>>>>> +++ b/mm/swapfile.c
>>>>>> @@ -39,6 +39,7 @@
>>>>>>  #include
>>>>>>  #include
>>>>>>  #include
>>>>>> +#include <linux/completion.h>
>>>>>>
>>>>>>  #include
>>>>>>  #include
>>>>>> @@ -511,6 +512,14 @@ static void swap_discard_work(struct work_struct *work)
>>>>>>  	spin_unlock(&si->lock);
>>>>>>  }
>>>>>>
>>>>>> +static void swap_users_ref_free(struct percpu_ref *ref)
>>>>>> +{
>>>>>> +	struct swap_info_struct *si;
>>>>>> +
>>>>>> +	si = container_of(ref, struct swap_info_struct, users);
>>>>>> +	complete(&si->comp);
>>>>>> +}
>>>>>> +
>>>>>>  static void alloc_cluster(struct swap_info_struct *si, unsigned long idx)
>>>>>>  {
>>>>>>  	struct swap_cluster_info *ci = si->cluster_info;
>>>>>> @@ -2500,7 +2509,7 @@ static void enable_swap_info(struct swap_info_struct *p, int prio,
>>>>>>  	 * Guarantee swap_map, cluster_info, etc. fields are valid
>>>>>>  	 * between get/put_swap_device() if SWP_VALID bit is set
>>>>>>  	 */
>>>>>> -	synchronize_rcu();
>>>>>
>>>>> You cannot remove this without changing get/put_swap_device(). It's
>>>>> better to squash at least PATCH 1-2.
>>>>
>>>> Will squash PATCH 1-2. Thanks.
>>>>
>>>>>
>>>>>> +	percpu_ref_resurrect(&p->users);
>>>>>>  	spin_lock(&swap_lock);
>>>>>>  	spin_lock(&p->lock);
>>>>>>  	_enable_swap_info(p);
>>>>>> @@ -2621,11 +2630,18 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
>>>>>>  	p->flags &= ~SWP_VALID;	/* mark swap device as invalid */
>>>>>>  	spin_unlock(&p->lock);
>>>>>>  	spin_unlock(&swap_lock);
>>>>>> +
>>>>>> +	percpu_ref_kill(&p->users);
>>>>>>  	/*
>>>>>> -	 * wait for swap operations protected by get/put_swap_device()
>>>>>> -	 * to complete
>>>>>> +	 * We need synchronize_rcu() here to protect the accessing
>>>>>> +	 * to the swap cache data structure.
>>>>>>  	 */
>>>>>>  	synchronize_rcu();
>>>>>> +	/*
>>>>>> +	 * Wait for swap operations protected by get/put_swap_device()
>>>>>> +	 * to complete.
>>>>>> +	 */
>>>>>
>>>>> I think the comments (after some revision) can be moved before
>>>>> percpu_ref_kill(). The synchronize_rcu() comments can be merged.
>>>>>
>>>>
>>>> Ok.
>>>>
>>>>>> +	wait_for_completion(&p->comp);
>>>>>>
>>>>>>  	flush_work(&p->discard_work);
>>>>>>
>>>>>> @@ -3132,7 +3148,7 @@ static bool swap_discardable(struct swap_info_struct *si)
>>>>>>  SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
>>>>>>  {
>>>>>>  	struct swap_info_struct *p;
>>>>>> -	struct filename *name;
>>>>>> +	struct filename *name = NULL;
>>>>>>  	struct file *swap_file = NULL;
>>>>>>  	struct address_space *mapping;
>>>>>>  	int prio;
>>>>>> @@ -3163,6 +3179,15 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
>>>>>>
>>>>>>  	INIT_WORK(&p->discard_work, swap_discard_work);
>>>>>>
>>>>>> +	if (!p->ref_initialized) {
>>>>>
>>>>> I don't think it's necessary to add another flag p->ref_initialized. We
>>>>> can distinguish newly allocated and reused swap_info_struct in alloc_swap_info().
>>>>>
>>>>
>>>> If newly allocated swap_info_struct failed to init percpu_ref, it will be considered as
>>>> a reused one in alloc_swap_info() _but_ the field users of swap_info_struct is actually
>>>> uninitialized. Does this make sense for you?
>>>
>>> We can call percpu_ref_init() just after kvzalloc() in alloc_swap_info().
>>>
>>
>> Yes, we can do it this way. But using ref_initialized might make the code more straightforward
>> and simple?
>
> I think that it's simpler to call percpu_ref_init() in
> alloc_swap_info(). We can just call percpu_ref_init() for allocated
> swap_info_struct blindly, and call percpu_ref_exit() if we reuse.
>

Looks good. Will do. Many thanks again.

> Best Regards,
> Huang, Ying
>
>>> Best Regards,
>>> Huang, Ying
>>>
>>>> Many Thanks for quick review.
>>>>
>>>>> Best Regards,
>>>>> Huang, Ying
>>>>>
>>>>>> +		error = percpu_ref_init(&p->users, swap_users_ref_free,
>>>>>> +					PERCPU_REF_INIT_DEAD, GFP_KERNEL);
>>>>>> +		if (unlikely(error))
>>>>>> +			goto bad_swap;
>>>>>> +		init_completion(&p->comp);
>>>>>> +		p->ref_initialized = true;
>>>>>> +	}
>>>>>> +
>>>>>>  	name = getname(specialfile);
>>>>>>  	if (IS_ERR(name)) {
>>>>>>  		error = PTR_ERR(name);
>>>>> .
>>>>>
>>> .
>>>
> .
>
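For anyone following the thread, the direction agreed above (call percpu_ref_init()
blindly on the freshly allocated swap_info_struct and percpu_ref_exit() on the copy
that turns out to be unneeded when an existing entry is reused) could look roughly
like the sketch below. This is only an illustration of the idea, not the actual v3
patch; the "defer" local, the elided initialization, and the exact error handling
are assumptions.

static struct swap_info_struct *alloc_swap_info(void)
{
	struct swap_info_struct *p, *defer = NULL;
	unsigned int type;

	p = kvzalloc(struct_size(p, avail_lists, nr_node_ids), GFP_KERNEL);
	if (!p)
		return ERR_PTR(-ENOMEM);

	/* Init the ref unconditionally; it stays DEAD until enable_swap_info(). */
	if (percpu_ref_init(&p->users, swap_users_ref_free,
			    PERCPU_REF_INIT_DEAD, GFP_KERNEL)) {
		kvfree(p);
		return ERR_PTR(-ENOMEM);
	}

	spin_lock(&swap_lock);
	for (type = 0; type < nr_swapfiles; type++) {
		if (!(swap_info[type]->flags & SWP_USED))
			break;
	}
	if (type < nr_swapfiles) {
		/* Reuse the existing entry; tear the new allocation down later. */
		defer = p;
		p = swap_info[type];
	}
	/* ... existing initialization of p, nr_swapfiles, etc. ... */
	spin_unlock(&swap_lock);

	if (defer) {
		percpu_ref_exit(&defer->users);
		kvfree(defer);
	}
	init_completion(&p->comp);
	return p;
}

With this shape the p->ref_initialized flag (and the init block inside swapon) goes
away: every swap_info_struct that leaves alloc_swap_info() already has a valid,
dead percpu_ref and an initialized completion.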
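The reader side belongs to patch 2 of the series and is not shown in this mail, so
the following is only a guess at how get/put_swap_device() might sit on top of
si->users once the SWP_VALID/synchronize_rcu() scheme is dropped; memory-ordering
details and the offset sanity checks of the real code are omitted.

/* Sketch only, under the assumptions stated above. */
static inline struct swap_info_struct *get_swap_device(swp_entry_t entry)
{
	struct swap_info_struct *si = swp_swap_info(entry);

	/*
	 * A live reference can only be obtained after enable_swap_info()
	 * has done percpu_ref_resurrect(), and fails again as soon as
	 * swapoff has done percpu_ref_kill().
	 */
	if (!si || !percpu_ref_tryget_live(&si->users))
		return NULL;
	return si;
}

static inline void put_swap_device(struct swap_info_struct *si)
{
	percpu_ref_put(&si->users);
}

/*
 * Swapoff side, matching the hunk quoted above:
 *   percpu_ref_kill(&p->users);     - no new get_swap_device() users
 *   synchronize_rcu();              - swap cache readers drain
 *   wait_for_completion(&p->comp);  - last percpu_ref_put() triggers
 *                                     swap_users_ref_free() -> complete()
 */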