Subject: Re: [PATCH v1 7/7] fs/proc/kcore: use page_offline_(freeze|unfreeze)
To: Mike Rapoport
Cc: linux-kernel@vger.kernel.org, Andrew Morton,
Tsirkin" , Jason Wang , Alexey Dobriyan , "Matthew Wilcox (Oracle)" , Oscar Salvador , Michal Hocko , Roman Gushchin , Alex Shi , Steven Price , Mike Kravetz , Aili Yao , Jiri Bohac , "K. Y. Srinivasan" , Haiyang Zhang , Stephen Hemminger , Wei Liu , Naoya Horiguchi , linux-hyperv@vger.kernel.org, virtualization@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org References: <20210429122519.15183-1-david@redhat.com> <20210429122519.15183-8-david@redhat.com> <5a5a7552-4f0a-75bc-582f-73d24afcf57b@redhat.com> <2f66cbfc-aa29-b3ef-4c6a-0da8b29b56f6@redhat.com> From: David Hildenbrand Organization: Red Hat Message-ID: Date: Mon, 3 May 2021 13:35:49 +0200 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.8.1 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=utf-8; format=flowed Content-Language: en-US Content-Transfer-Encoding: 7bit Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 03.05.21 13:33, Mike Rapoport wrote: > On Mon, May 03, 2021 at 12:13:45PM +0200, David Hildenbrand wrote: >> On 03.05.21 11:28, Mike Rapoport wrote: >>> On Mon, May 03, 2021 at 10:28:36AM +0200, David Hildenbrand wrote: >>>> On 02.05.21 08:34, Mike Rapoport wrote: >>>>> On Thu, Apr 29, 2021 at 02:25:19PM +0200, David Hildenbrand wrote: >>>>>> Let's properly synchronize with drivers that set PageOffline(). Unfreeze >>>>>> every now and then, so drivers that want to set PageOffline() can make >>>>>> progress. >>>>>> >>>>>> Signed-off-by: David Hildenbrand >>>>>> --- >>>>>> fs/proc/kcore.c | 15 +++++++++++++++ >>>>>> 1 file changed, 15 insertions(+) >>>>>> >>>>>> diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c >>>>>> index 92ff1e4436cb..3d7531f47389 100644 >>>>>> --- a/fs/proc/kcore.c >>>>>> +++ b/fs/proc/kcore.c >>>>>> @@ -311,6 +311,7 @@ static void append_kcore_note(char *notes, size_t *i, const char *name, >>>>>> static ssize_t >>>>>> read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos) >>>>>> { >>>>>> + size_t page_offline_frozen = 0; >>>>>> char *buf = file->private_data; >>>>>> size_t phdrs_offset, notes_offset, data_offset; >>>>>> size_t phdrs_len, notes_len; >>>>>> @@ -509,6 +510,18 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos) >>>>>> pfn = __pa(start) >> PAGE_SHIFT; >>>>>> page = pfn_to_online_page(pfn); >>>>> >>>>> Can't this race with page offlining for the first time we get here? >>>> >>>> >>>> To clarify, we have three types of offline pages in the kernel ... >>>> >>>> a) Pages part of an offline memory section; the memap is stale and not >>>> trustworthy. pfn_to_online_page() checks that. We *can* protect against >>>> memory offlining using get_online_mems()/put_online_mems(), but usually >>>> avoid doing so as the race window is very small (and a problem all over the >>>> kernel we basically never hit) and locking is rather expensive. In the >>>> future, we might switch to rcu to handle that more efficiently and avoiding >>>> these possible races. >>>> >>>> b) PageOffline(): logically offline pages contained in an online memory >>>> section with a sane memmap. virtio-mem calls these pages "fake offline"; >>>> something like a "temporary" memory hole. The new mechanism I propose will >>>> be used to handle synchronization as races can be more severe, e.g., when >>>> reading actual page content here. >>>> >>>> c) Soft offline pages: hwpoisoned pages that are not actually harmful yet, >>>> but could become harmful in the future. 
>>>> So we better try to remove the page from the page allocator and try to
>>>> migrate away existing users.
>>>>
>>>> So page_offline_* handle "b) PageOffline()" only. There is a tiny race
>>>> between pfn_to_online_page(pfn) and looking at the memmap, as we have in
>>>> many cases already throughout the kernel, to be tackled in the future.
>>>
>>> Right, but you add locking here anyway, so why exclude the first iteration?
>>
>> What we're protecting is PageOffline() below. If I didn't mess up, we
>> should always be calling page_offline_freeze() before calling
>> PageOffline(). Or am I missing something?
>
> Somehow I was under the impression that we are protecting both
> pfn_to_online_page() and PageOffline().
>
>>> BTW, did you consider something like
>>
>> Yes, I played with something like that. We'd have to handle the first
>> page_offline_freeze() differently, though, and that's where things got a
>> bit ugly in my attempts.
>>
>>>
>>> if (page_offline_frozen++ % MAX_ORDER_NR_PAGES == 0) {
>>> 	page_offline_unfreeze();
>>> 	cond_resched();
>>> 	page_offline_freeze();
>>> }
>>>
>>> We don't seem to care about page_offline_frozen overflows here, do we?
>>
>> No, the buffer size is also a size_t and gets incremented on a per-byte
>> basis. The variant I have right now looked the cleanest to me. Happy to
>> hear simpler alternatives.
>
> Well, locking for the first time before the while() loop and doing the
> resched-relock outside the switch() would definitely be nicer, and it
> makes the last unlock unconditional.
>
> The cost of preventing memory offline during reads of the !KCORE_RAM parts
> does not seem that significant to me, but I may be missing something.

Also true. I'll have a look at whether I can just simplify that.

-- 
Thanks,

David / dhildenb
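
For illustration, the restructuring Mike suggests could look roughly like the
sketch below: take the freeze once before the while() loop, do the periodic
unfreeze/cond_resched()/refreeze at the top of the loop body (outside the
switch()), and unfreeze unconditionally at the end. This is only a sketch of
the idea under discussion, not the code that was actually posted or merged:
the loop body is heavily abbreviated, orig_buflen and the m lookup stand in
for the function's usual bookkeeping, and page_offline_freeze(),
page_offline_unfreeze() and MAX_ORDER_NR_PAGES simply mirror the patch series
and Mike's example above.

static ssize_t
read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
{
	/* Start at 1 so we don't immediately drop the freeze we just took. */
	size_t page_offline_frozen = 1;
	size_t orig_buflen = buflen;
	struct kcore_list *m;

	/* ... set up phdrs/notes, take kclist_lock, look up m per iteration (elided) ... */

	/*
	 * Taken once up front: drivers cannot newly set PageOffline() while
	 * we inspect and copy pages, and the unfreeze at the end becomes
	 * unconditional.
	 */
	page_offline_freeze();

	while (buflen) {
		/*
		 * Periodically drop the freeze so drivers that want to set
		 * PageOffline() can make progress, and give the scheduler a
		 * chance to run.
		 */
		if (page_offline_frozen++ % MAX_ORDER_NR_PAGES == 0) {
			page_offline_unfreeze();
			cond_resched();
			page_offline_freeze();
		}

		switch (m->type) {
		case KCORE_RAM:
			/*
			 * pfn_to_online_page() + PageOffline() checks and the
			 * actual copy go here; PageOffline() is what the
			 * freeze protects.
			 */
			break;
		default:
			/* The !KCORE_RAM parts are simply read while frozen. */
			break;
		}

		/* ... advance buffer/buflen/start (elided) ... */
	}

	page_offline_unfreeze();
	/* ... drop kclist_lock, error handling (elided) ... */
	return orig_buflen - buflen;
}

As noted in the thread, an overflow of page_offline_frozen would be harmless
with this structure as well: wrapping around merely causes one extra
unfreeze/refreeze cycle.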