Date: Wed, 11 May 2022 17:49:49 -0700
From: "Paul E. McKenney"
McKenney" To: John Hubbard Cc: Minchan Kim , Andrew Morton , linux-mm , LKML , John Dias , David Hildenbrand Subject: Re: [PATCH v4] mm: fix is_pinnable_page against on cma page Message-ID: <20220512004949.GK1790663@paulmck-ThinkPad-P17-Gen-1> Reply-To: paulmck@kernel.org References: <8f083802-7ab0-15ec-b37d-bc9471eea0b1@nvidia.com> <20220511234534.GG1790663@paulmck-ThinkPad-P17-Gen-1> <0d90390c-3624-4f93-f8bd-fb29e92237d3@nvidia.com> <20220512002207.GJ1790663@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Spam-Status: No, score=-7.7 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_HI, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Wed, May 11, 2022 at 05:34:52PM -0700, John Hubbard wrote: > On 5/11/22 17:26, Minchan Kim wrote: > > > > Let me try to say this more clearly: I don't think that the following > > > > __READ_ONCE() statement can actually help anything, given that > > > > get_pageblock_migratetype() is non-inlined: > > > > > > > > + int __mt = get_pageblock_migratetype(page); > > > > + int mt = __READ_ONCE(__mt); > > > > + > > > > + if (mt & (MIGRATE_CMA | MIGRATE_ISOLATE)) > > > > + return false; > > > > > > > > > > > > Am I missing anything here? > > > > > > In the absence of future aggression from link-time optimizations (LTO), > > > you are missing nothing. > > > > A thing I want to note is Android kernel uses LTO full mode. > > Thanks Paul for explaining the state of things. > > Minchan, how about something like very close to your original draft, > then, but with a little note, and the "&" as well: > > int __mt = get_pageblock_migratetype(page); > > /* > * Defend against future compiler LTO features, or code refactoring > * that inlines the above function, by forcing a single read. Because, this > * routine races with set_pageblock_migratetype(), and we want to avoid > * reading zero, when actually one or the other flags was set. > */ > int mt = __READ_ONCE(__mt); > > if (mt & (MIGRATE_CMA | MIGRATE_ISOLATE)) > return false; > > > ...which should make everyone comfortable and protected from the > future sins of the compiler and linker teams? :) This would work, but it would force a store to the stack and an immediate reload. Which might be OK on this code path. But using READ_ONCE() in (I think?) __get_pfnblock_flags_mask() would likely generate the same code that is produced today. word = READ_ONCE(bitmap[word_bitidx]); But I could easily have missed a turn in that cascade of functions. ;-) Or there might be some code path that really hates a READ_ONCE() in that place. Thanx, Paul