Date: Mon, 3 May 2021 14:33:43 +0300
From: Mike Rapoport
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, Andrew Morton, "Michael S. Tsirkin",
	Jason Wang, Alexey Dobriyan, "Matthew Wilcox (Oracle)",
	Oscar Salvador, Michal Hocko, Roman Gushchin, Alex Shi,
	Steven Price, Mike Kravetz, Aili Yao, Jiri Bohac,
	"K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
	Naoya Horiguchi, linux-hyperv@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v1 7/7] fs/proc/kcore: use page_offline_(freeze|unfreeze)
Message-ID:
References: <20210429122519.15183-1-david@redhat.com>
	<20210429122519.15183-8-david@redhat.com>
	<5a5a7552-4f0a-75bc-582f-73d24afcf57b@redhat.com>
	<2f66cbfc-aa29-b3ef-4c6a-0da8b29b56f6@redhat.com>
In-Reply-To: <2f66cbfc-aa29-b3ef-4c6a-0da8b29b56f6@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, May 03, 2021 at 12:13:45PM +0200, David Hildenbrand wrote:
> On 03.05.21 11:28, Mike Rapoport wrote:
> > On Mon, May 03, 2021 at 10:28:36AM +0200, David Hildenbrand wrote:
> > > On 02.05.21 08:34, Mike Rapoport wrote:
> > > > On Thu, Apr 29, 2021 at 02:25:19PM +0200, David Hildenbrand wrote:
> > > > > Let's properly synchronize with drivers that set PageOffline(). Unfreeze
> > > > > every now and then, so drivers that want to set PageOffline() can make
> > > > > progress.
> > > > > 
> > > > > Signed-off-by: David Hildenbrand
> > > > > ---
> > > > >  fs/proc/kcore.c | 15 +++++++++++++++
> > > > >  1 file changed, 15 insertions(+)
> > > > > 
> > > > > diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
> > > > > index 92ff1e4436cb..3d7531f47389 100644
> > > > > --- a/fs/proc/kcore.c
> > > > > +++ b/fs/proc/kcore.c
> > > > > @@ -311,6 +311,7 @@ static void append_kcore_note(char *notes, size_t *i, const char *name,
> > > > >  static ssize_t
> > > > >  read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
> > > > >  {
> > > > > +	size_t page_offline_frozen = 0;
> > > > >  	char *buf = file->private_data;
> > > > >  	size_t phdrs_offset, notes_offset, data_offset;
> > > > >  	size_t phdrs_len, notes_len;
> > > > > @@ -509,6 +510,18 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
> > > > >  			pfn = __pa(start) >> PAGE_SHIFT;
> > > > >  			page = pfn_to_online_page(pfn);
> > > > 
> > > > Can't this race with page offlining the first time we get here?
> > > 
> > > To clarify, we have three types of offline pages in the kernel ...
> > > 
> > > a) Pages part of an offline memory section; the memmap is stale and not
> > > trustworthy. pfn_to_online_page() checks that. We *can* protect against
> > > memory offlining using get_online_mems()/put_online_mems(), but we
> > > usually avoid doing so, as the race window is very small (and a problem
> > > all over the kernel that we basically never hit) and locking is rather
> > > expensive. In the future, we might switch to RCU to handle that more
> > > efficiently and avoid these possible races.
> > > 
> > > b) PageOffline(): logically offline pages contained in an online memory
> > > section with a sane memmap. virtio-mem calls these pages "fake offline";
> > > something like a "temporary" memory hole. The new mechanism I propose
> > > will be used to handle synchronization, as races can be more severe,
> > > e.g., when reading actual page content here.
> > > 
> > > c) Soft offline pages: hwpoisoned pages that are not actually harmful
> > > yet, but could become harmful in the future. So we better try to remove
> > > the page from the page allocator and try to migrate away existing users.
> > > 
> > > So page_offline_* handles "b) PageOffline()" only. There is a tiny race
> > > between pfn_to_online_page(pfn) and looking at the memmap, as we have in
> > > many cases already throughout the kernel, to be tackled in the future.
> > 
> > Right, but you anyway add locking here, so why exclude the first
> > iteration?
> 
> What we're protecting is PageOffline() below. If I didn't mess up, we
> should always be calling page_offline_freeze() before calling
> PageOffline(). Or am I missing something?

Somehow I was under the impression we are protecting both
pfn_to_online_page() and PageOffline().

> > BTW, did you consider something like
> 
> Yes, I played with something like that. We'd have to handle the first
> page_offline_freeze() differently, though, and that's where things got a
> bit ugly in my attempts.
> 
> > 	if (page_offline_frozen++ % MAX_ORDER_NR_PAGES == 0) {
> > 		page_offline_unfreeze();
> > 		cond_resched();
> > 		page_offline_freeze();
> > 	}
> > 
> > We don't seem to care about page_offline_frozen overflows here, do we?
> 
> No, the buffer size is also size_t and gets incremented on a per-byte
> basis. The variant I have right now looked the cleanest to me. Happy to
> hear simpler alternatives.
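For concreteness, special-casing the first freeze inside the KCORE_RAM
branch might look roughly like this (an illustrative sketch only, not the
code from your patch):

	case KCORE_RAM:
		pfn = __pa(start) >> PAGE_SHIFT;
		page = pfn_to_online_page(pfn);

		/*
		 * Illustrative sketch: take the freeze before the first
		 * PageOffline() test, then cycle it periodically so that
		 * drivers that want to set PageOffline() can make progress.
		 */
		if (page_offline_frozen == 0)
			page_offline_freeze();
		else if (page_offline_frozen % MAX_ORDER_NR_PAGES == 0) {
			page_offline_unfreeze();
			cond_resched();
			page_offline_freeze();
		}
		page_offline_frozen++;

That keeps the first iteration correct, but the final unfreeze then has to
stay conditional on page_offline_frozen != 0.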
Well, locking for the first time before the while() loop and doing the
resched-relock outside the switch() would definitely be nicer, and it makes
the last unlock unconditional.

The cost of preventing memory offlining during reads of the !KCORE_RAM
parts does not seem that significant to me, but I may be missing something.
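Roughly this shape (again just a sketch of the idea, not tested code; the
rest of read_kcore() is omitted):

	/* Start at 1 so the first pass does not immediately unfreeze. */
	size_t page_offline_frozen = 1;

	/* Lock once, before the while () loop ... */
	page_offline_freeze();

	while (buflen) {
		/* ... resched-relock at the top, outside the switch () ... */
		if (page_offline_frozen++ % MAX_ORDER_NR_PAGES == 0) {
			page_offline_unfreeze();
			cond_resched();
			page_offline_freeze();
		}

		switch (m->type) {
		case KCORE_RAM:
			pfn = __pa(start) >> PAGE_SHIFT;
			page = pfn_to_online_page(pfn);
			/* PageOffline() is now always tested while frozen. */
			break;
		/* ... other segment types unchanged ... */
		}
		/* ... */
	}

	/* ... and the last unlock is unconditional. */
	page_offline_unfreeze();

-- 
Sincerely yours,
Mike.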