Subject: Re: [PATCH 1/5] mm/swapfile: add percpu_ref support for swap
To: "Huang, Ying"
Cc: Dennis Zhou
References: <46a51c49-2887-0c1a-bcf3-e1ebe9698ebf@huawei.com>
 <874kg9u0jo.fsf@yhuang6-desk1.ccr.corp.intel.com>
 <75e27441-7744-7a10-e709-c8cd00830099@huawei.com>
 <87tuo9sjpj.fsf@yhuang6-desk1.ccr.corp.intel.com>
 <877dl5seig.fsf@yhuang6-desk1.ccr.corp.intel.com>
 <87zgy1qv1h.fsf@yhuang6-desk1.ccr.corp.intel.com>
 <87o8egp1bk.fsf@yhuang6-desk1.ccr.corp.intel.com>
 <32f4c6f4-c969-c434-82d6-66a0e76ac4c6@huawei.com>
 <8735vqoiec.fsf@yhuang6-desk1.ccr.corp.intel.com>
From: Miaohe Lin
Message-ID: <2a25f319-ccf2-8157-5f97-2c650cb83442@huawei.com>
Date: Fri, 16 Apr 2021 16:30:40 +0800
In-Reply-To: <8735vqoiec.fsf@yhuang6-desk1.ccr.corp.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2021/4/16 14:25, Huang, Ying wrote:
> Miaohe Lin writes:
>
>> On 2021/4/15 22:31, Dennis Zhou wrote:
>>> On Thu, Apr 15, 2021 at 01:24:31PM +0800, Huang, Ying wrote:
>>>> Dennis Zhou writes:
>>>>
>>>>> On Wed, Apr 14, 2021 at 01:44:58PM +0800, Huang, Ying wrote:
>>>>>> Dennis Zhou writes:
>>>>>>
>>>>>>> On Wed, Apr 14, 2021 at 11:59:03AM +0800, Huang, Ying wrote:
>>>>>>>> Dennis Zhou writes:
>>>>>>>>
>>>>>>>>> Hello,
>>>>>>>>>
>>>>>>>>> On Wed, Apr 14, 2021 at 10:06:48AM +0800, Huang, Ying wrote:
>>>>>>>>>> Miaohe Lin writes:
>>>>>>>>>>
>>>>>>>>>>> On 2021/4/14 9:17, Huang, Ying wrote:
>>>>>>>>>>>> Miaohe Lin writes:
>>>>>>>>>>>>
>>>>>>>>>>>>> On 2021/4/12 15:24, Huang, Ying wrote:
>>>>>>>>>>>>>> "Huang, Ying" writes:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Miaohe Lin writes:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> We will use percpu-refcount to serialize against concurrent swapoff. This
>>>>>>>>>>>>>>>> patch adds the percpu_ref support for later fixup.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Signed-off-by: Miaohe Lin
>>>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>>>  include/linux/swap.h |  2 ++
>>>>>>>>>>>>>>>>  mm/swapfile.c        | 25 ++++++++++++++++++++++---
>>>>>>>>>>>>>>>>  2 files changed, 24 insertions(+), 3 deletions(-)
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>>>>>>>>>>>>>>>> index 144727041e78..849ba5265c11 100644
>>>>>>>>>>>>>>>> --- a/include/linux/swap.h
>>>>>>>>>>>>>>>> +++ b/include/linux/swap.h
>>>>>>>>>>>>>>>> @@ -240,6 +240,7 @@ struct swap_cluster_list {
>>>>>>>>>>>>>>>>   * The in-memory structure used to track swap areas.
>>>>>>>>>>>>>>>>   */
>>>>>>>>>>>>>>>>  struct swap_info_struct {
>>>>>>>>>>>>>>>> +	struct percpu_ref users;	/* serialization against concurrent swapoff */
>>>>>>>>>>>>>>>>  	unsigned long flags;		/* SWP_USED etc: see above */
>>>>>>>>>>>>>>>>  	signed short prio;		/* swap priority of this type */
>>>>>>>>>>>>>>>>  	struct plist_node list;		/* entry in swap_active_head */
>>>>>>>>>>>>>>>> @@ -260,6 +261,7 @@ struct swap_info_struct {
>>>>>>>>>>>>>>>>  	struct block_device *bdev;	/* swap device or bdev of swap file */
>>>>>>>>>>>>>>>>  	struct file *swap_file;		/* seldom referenced */
>>>>>>>>>>>>>>>>  	unsigned int old_block_size;	/* seldom referenced */
>>>>>>>>>>>>>>>> +	struct completion comp;		/* seldom referenced */
>>>>>>>>>>>>>>>>  #ifdef CONFIG_FRONTSWAP
>>>>>>>>>>>>>>>>  	unsigned long *frontswap_map;	/* frontswap in-use, one bit per page */
>>>>>>>>>>>>>>>>  	atomic_t frontswap_pages;	/* frontswap pages in-use counter */
>>>>>>>>>>>>>>>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>>>>>>>>>>>>>>>> index 149e77454e3c..724173cd7d0c 100644
>>>>>>>>>>>>>>>> --- a/mm/swapfile.c
>>>>>>>>>>>>>>>> +++ b/mm/swapfile.c
>>>>>>>>>>>>>>>> @@ -39,6 +39,7 @@
>>>>>>>>>>>>>>>>  #include 
>>>>>>>>>>>>>>>>  #include 
>>>>>>>>>>>>>>>>  #include 
>>>>>>>>>>>>>>>> +#include 
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>>  #include 
>>>>>>>>>>>>>>>>  #include 
>>>>>>>>>>>>>>>> @@ -511,6 +512,15 @@ static void swap_discard_work(struct work_struct *work)
>>>>>>>>>>>>>>>>  	spin_unlock(&si->lock);
>>>>>>>>>>>>>>>>  }
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> +static void swap_users_ref_free(struct percpu_ref *ref)
>>>>>>>>>>>>>>>> +{
>>>>>>>>>>>>>>>> +	struct swap_info_struct *si;
>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>> +	si = container_of(ref, struct swap_info_struct, users);
>>>>>>>>>>>>>>>> +	complete(&si->comp);
>>>>>>>>>>>>>>>> +	percpu_ref_exit(&si->users);
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Because percpu_ref_exit() is used, we cannot use percpu_ref_tryget() in
>>>>>>>>>>>>>>> get_swap_device(), better to add comments there.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I just noticed that the comments of percpu_ref_tryget_live() say,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>  * This function is safe to call as long as @ref is between init and exit.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> While we need to call get_swap_device() almost at any time, so it's
>>>>>>>>>>>>>> better to avoid calling percpu_ref_exit() at all. This will waste some
>>>>>>>>>>>>>> memory, but we need to follow the API definition to avoid potential
>>>>>>>>>>>>>> issues in the long term.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I have to admit that I'm not really familiar with percpu_ref. So I read the
>>>>>>>>>>>>> implementation code of percpu_ref and found percpu_ref_tryget_live() could
>>>>>>>>>>>>> be called after exit now. But you're right, we need to follow the API definition
>>>>>>>>>>>>> to avoid potential issues in the long term.
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> And we need to call percpu_ref_init() before inserting the swap_info_struct
>>>>>>>>>>>>>> into swap_info[].
>>>>>>>>>>>>>
>>>>>>>>>>>>> If we remove the call to percpu_ref_exit(), we should not use percpu_ref_init()
>>>>>>>>>>>>> here because *percpu_ref->data is assumed to be NULL* in percpu_ref_init() while
>>>>>>>>>>>>> this is not the case as we do not call percpu_ref_exit(). Maybe percpu_ref_reinit()
>>>>>>>>>>>>> or percpu_ref_resurrect() will do the work.
>>>>>>>>>>>>>
>>>>>>>>>>>>> One more thing, how could I distinguish the killed percpu_ref from a newly allocated one?
>>>>>>>>>>>>> It seems percpu_ref_is_dying is only safe to call when @ref is between init and exit.
>>>>>>>>>>>>> Maybe I could do this in alloc_swap_info()?
>>>>>>>>>>>>
>>>>>>>>>>>> Yes. In alloc_swap_info(), you can distinguish newly allocated and
>>>>>>>>>>>> reused swap_info_struct.
>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> +}
>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>>  static void alloc_cluster(struct swap_info_struct *si, unsigned long idx)
>>>>>>>>>>>>>>>>  {
>>>>>>>>>>>>>>>>  	struct swap_cluster_info *ci = si->cluster_info;
>>>>>>>>>>>>>>>> @@ -2500,7 +2510,7 @@ static void enable_swap_info(struct swap_info_struct *p, int prio,
>>>>>>>>>>>>>>>>  	 * Guarantee swap_map, cluster_info, etc. fields are valid
>>>>>>>>>>>>>>>>  	 * between get/put_swap_device() if SWP_VALID bit is set
>>>>>>>>>>>>>>>>  	 */
>>>>>>>>>>>>>>>> -	synchronize_rcu();
>>>>>>>>>>>>>>>> +	percpu_ref_reinit(&p->users);
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Although the effect is the same, I think it's better to use
>>>>>>>>>>>>>>> percpu_ref_resurrect() here to improve code readability.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Check the original commit description for commit eb085574a752 "mm, swap:
>>>>>>>>>>>>>> fix race between swapoff and some swap operations" and the discussion email
>>>>>>>>>>>>>> thread as follows again,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> https://lore.kernel.org/linux-mm/20171219053650.GB7829@linux.vnet.ibm.com/
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I found that the synchronize_rcu() here is to avoid calling smp_rmb() or
>>>>>>>>>>>>>> smp_load_acquire() in get_swap_device(). Now we will use
>>>>>>>>>>>>>> percpu_ref_tryget_live() in get_swap_device(), so we will need to add
>>>>>>>>>>>>>> the necessary memory barrier, or make sure percpu_ref_tryget_live() has
>>>>>>>>>>>>>> ACQUIRE semantics. Per my understanding, we need to change
>>>>>>>>>>>>>> percpu_ref_tryget_live() for that.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Do you mean the below scenario is possible?
>>>>>>>>>>>>>
>>>>>>>>>>>>> cpu1
>>>>>>>>>>>>> swapon()
>>>>>>>>>>>>> ...
>>>>>>>>>>>>> percpu_ref_init
>>>>>>>>>>>>> ...
>>>>>>>>>>>>> setup_swap_info
>>>>>>>>>>>>> /* smp_store_release() is inside percpu_ref_reinit */
>>>>>>>>>>>>> percpu_ref_reinit
>>>>>>>>>>>>
>>>>>>>>>>>> spin_unlock() has RELEASE semantics already.
>>>>>>>>>>>>
>>>>>>>>>>>>> ...
>>>>>>>>>>>>>
>>>>>>>>>>>>> cpu2
>>>>>>>>>>>>> get_swap_device()
>>>>>>>>>>>>> /* ignored smp_rmb() */
>>>>>>>>>>>>> percpu_ref_tryget_live
>>>>>>>>>>>>
>>>>>>>>>>>> Some kind of ACQUIRE is required here to guarantee the refcount is
>>>>>>>>>>>> checked before fetching the other fields of swap_info_struct. I have
>>>>>>>>>>>> sent out an RFC patch to the mailing list to discuss this.
>>>>>>>>>
>>>>>>>>> I'm just catching up and following along a little bit. I apologize I
>>>>>>>>> haven't read the swap code, but my understanding is you are trying to
>>>>>>>>> narrow a race condition with swapoff. That makes sense to me. I'm not
>>>>>>>>> sure I follow the need to race with reinitializing the ref though? Is it
>>>>>>>>> not possible to wait out the dying swap info and then create a new one
>>>>>>>>> rather than push acquire semantics?
>>>>>>>>
>>>>>>>> We want to check whether the swap entry is valid (that is, the swap
>>>>>>>> device isn't swapped off now), prevent it from swapping off, then access
>>>>>>>> the swap_info_struct data structure. When accessing swap_info_struct,
>>>>>>>> we want to guarantee the ordering, so that we will not reference
>>>>>>>> uninitialized fields of swap_info_struct.
>>>>>>>>
>>>>>>> So in the normal context of percpu_ref, once someone can access it, the
>>>>>>> elements that it is protecting are expected to be initialized.
>>>>>>
>>>>>> If we can make sure that all elements are fully initialized, why not
>>>>>> just use percpu_ref_get() instead of percpu_ref_tryget*()?
>>>>>>
>>>>>
>>>>> Generally, the lookup is protected with rcu and then
>>>>> percpu_ref_tryget*() is used to obtain a reference. percpu_ref_get() is
>>>>> only good if you already have a ref as it increments regardless of being
>>>>> 0.
>>>>>
>>>>> What I mean is if you can get a ref, that means the object hasn't been
>>>>> destroyed. This differs from the semantics you are looking for, which I
>>>>> understand to be: I have long lived pointers to objects. The object may
>>>>> die, but I may resurrect it and I want the old pointers to still be
>>>>> valid.
>>>>>
>>>>> When is it possible for someone to have a pointer to the swap device and
>>>>> the refcount goes to 0? It might be better to avoid this situation than
>>>>> add acquire semantics.
>>>>>
>>>>>>> In the basic case for swap off, I'm seeing the goal as to prevent
>>>>>>> destruction until anyone currently accessing swap is done. In this
>>>>>>> case wouldn't we always be protecting a live struct?
>>>>>>>
>>>>>>> I'm maybe not understanding under what conditions you're trying to revive the
>>>>>>> percpu_ref?
>>>>>>
>>>>>> A swap entry is like an indirect pointer to a swap device. We may hold a
>>>>>> swap entry for a long time, so that the swap device is swapoff/swapon.
>>>>>> Then we need to make sure the swap device is fully initialized before
>>>>>> accessing the swap device via the swap entry.
>>>>>>
>>>>>
>>>>> So if I have some number of outstanding references, and then
>>>>> percpu_ref_kill() is called, then only those that have the pointer will
>>>>> be able to use the swap device as those references are still good. Prior
>>>>> to calling percpu_ref_kill(), call_rcu() needs to be called on the lookup
>>>>> data structure.
>>>>>
>>>>> My personal understanding of tryget() vs tryget_live() is that it
>>>>> provides a 2-phase clean up and bounds the ability for new users to come
>>>>> in (cgroup destruction is a primary user). As tryget() might inevitably
>>>>> let a cgroup live long past its removal, tryget_live() will say oh,
>>>>> you're in the process of dying, do something else.
>>>>
>>>> OK. I think that I understand your typical use case now. The resource
>>>> producer code may look like,
>>>>
>>>>   obj = kmalloc();
>>>>   /* Initialize obj fields */
>>>>   percpu_ref_init(&obj->ref);
>>>>   rcu_assign_pointer(global_p, obj);
>>>>
>>>> The resource reclaimer looks like,
>>>>
>>>>   p = global_p;
>>>>   global_p = NULL;
>>>>   percpu_ref_kill(&p->ref);
>>>>   /* wait until percpu_ref_is_zero(&p->ref) */
>>>>   /* free resources pointed to by obj fields */
>>>>   kfree(p);
>>>>
>>>> The resource user looks like,
>>>>
>>>>   rcu_read_lock();
>>>>   p = rcu_dereference(global_p);
>>>>   if (!p || !percpu_ref_tryget_live(&p->ref)) {
>>>>     /* Invalid pointer, go out */
>>>>   }
>>>>   rcu_read_unlock();
>>>>   /* use p */
>>>>   percpu_ref_put(&p->ref);
>>>>
>>>> For this use case, it's not necessary to make percpu_ref_tryget_live() an
>>>> ACQUIRE operation, because the refcount doesn't act as a flag to indicate
>>>> whether the object has been fully initialized; global_p does. And
>>>> the data dependency guarantees the required ordering.
>>>>
>>>
>>> Yes, this is spot on.
>>>
>>>> The use case of swap is different.
>>>> Where global_p always points to
>>>> the obj (never freed) even if the resources pointed to by obj fields have
>>>> been freed. And we want to use the refcount as a flag to indicate whether
>>>> the object is fully initialized. This is hard to change, because
>>>> global_p is used to distinguish the stale pointer from the totally
>>>> invalid pointer.
>>>>
>>>
>>> Apologies ahead of time for this possibly dumb question. Is it possible
>>> to have swapon swap out the global_p with
>>> old_obj = rcu_access_pointer(global_p);
>>> rcu_assign_pointer(global_p, obj);
>>> kfree_rcu(remove_old_obj) or call_rcu();
>>>
>>> Then the obj pointed to by global_p would always be valid, but would only
>>> be alive again if it got the new pointer?
>>
>> Many thanks to both of you! Looks like a nice solution! Will try to do it in v2.
>> Thanks again! :)
>
> Think about this again. This means that we need to free the old
> swap_info_struct at some time. So something like RCU is needed to
> enclose the accessor. But some accessors don't follow this, and it
> appears overkill to change all these accessors. So I think, at least as
> the first step, smp_rmb() appears more appropriate.
>

Agree. Thanks!

> Best Regards,
> Huang, Ying
>
>>>
>>>> If all other users follow the typical use case above, we may find some
>>>> other way to resolve the problem inside the swap code, such as adding
>>>> smp_rmb() after percpu_ref_tryget_live().
>>>
>>> I would prefer it.
>>>
>>>> Best Regards,
>>>> Huang, Ying
>>>
>>> Thanks,
>>> Dennis
>>>
>>> .
>>>
>
> .
>
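For reference, a minimal sketch of what the direction agreed above could look like
on the reader side: percpu_ref_tryget_live() followed by an explicit smp_rmb(),
pairing with the RELEASE ordering (e.g. spin_unlock()) on the swapon side. This is
only an illustration of the idea discussed in this thread, not the actual v2 patch;
the si->users field comes from the diff quoted above, swp_swap_info() is the existing
swap helper, and the rest of the shape is assumed.

  /* Illustrative sketch only -- not the actual patch. */
  struct swap_info_struct *get_swap_device(swp_entry_t entry)
  {
  	struct swap_info_struct *si = swp_swap_info(entry);

  	if (!si)
  		return NULL;
  	/* Fails once swapoff has killed the ref. */
  	if (!percpu_ref_tryget_live(&si->users))
  		return NULL;
  	/*
  	 * Pairs with the RELEASE in the swapon path, so swap_map,
  	 * cluster_info, etc. are seen fully initialized after the check.
  	 */
  	smp_rmb();
  	return si;
  }

  static inline void put_swap_device(struct swap_info_struct *si)
  {
  	percpu_ref_put(&si->users);
  }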
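And the matching swapoff side, assuming the si->users/si->comp fields and the
swap_users_ref_free() callback from the quoted patch; the exact placement inside
swapoff() is an assumption:

  	/* Sketch: stop new get_swap_device() users, then wait for existing ones. */
  	percpu_ref_kill(&p->users);	/* percpu_ref_tryget_live() now fails */
  	wait_for_completion(&p->comp);	/* completed from swap_users_ref_free() */
  	/* now it is safe to free swap_map, cluster_info, ... */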