From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Naoya Horiguchi, Zi Yan,
    Dave Hansen, "H. Peter Anvin", Anshuman Khandual, David Nellans,
    Ingo Molnar, "Kirill A. Shutemov", Mel Gorman, Minchan Kim,
    Thomas Gleixner, Vlastimil Babka, Andrea Arcangeli, Michal Hocko,
    Andrew Morton, Linus Torvalds, David Woodhouse, Guenter Roeck
Subject: [PATCH 4.4 24/43] mm: x86: move _PAGE_SWP_SOFT_DIRTY from bit 7 to bit 1
Date: Tue, 14 Aug 2018 19:18:00 +0200
Message-Id: <20180814171518.700511001@linuxfoundation.org>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180814171517.014285600@linuxfoundation.org>
References: <20180814171517.014285600@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
X-Mailing-List: linux-kernel@vger.kernel.org

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Naoya Horiguchi

commit eee4818baac0f2b37848fdf90e4b16430dc536ac upstream

_PAGE_PSE is used to distinguish between a truly non-present
(_PAGE_PRESENT=0) PMD, and a PMD which is undergoing a THP split and
should be treated as present.

But _PAGE_SWP_SOFT_DIRTY currently uses the _PAGE_PSE bit, which would
cause confusion between one of those PMDs undergoing a THP split, and a
soft-dirty PMD.  Dropping the _PAGE_PSE check in pmd_present() does not
work well, because it can hurt optimization of TLB handling in THP split.
Thus, we need to move the bit.

In the current kernel, bits 1-4 are not used in the non-present format
since commit 00839ee3b299 ("x86/mm: Move swap offset/type up in PTE to
work around erratum").  So let's move _PAGE_SWP_SOFT_DIRTY to bit 1.
Bit 7 is used as reserved (always clear), so please don't use it for
any other purpose.

[dwmw2: Pulled in to 4.9 backport to support L1TF changes]

Link: http://lkml.kernel.org/r/20170717193955.20207-3-zi.yan@sent.com
Signed-off-by: Naoya Horiguchi
Signed-off-by: Zi Yan
Acked-by: Dave Hansen
Cc: "H. Peter Anvin"
Cc: Anshuman Khandual
Cc: David Nellans
Cc: Ingo Molnar
Cc: "Kirill A. Shutemov"
Cc: Mel Gorman
Cc: Minchan Kim
Cc: Thomas Gleixner
Cc: Vlastimil Babka
Cc: Andrea Arcangeli
Cc: Michal Hocko
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: David Woodhouse
Signed-off-by: Guenter Roeck
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/include/asm/pgtable_64.h    | 12 +++++++++---
 arch/x86/include/asm/pgtable_types.h | 10 +++++-----
 2 files changed, 14 insertions(+), 8 deletions(-)

--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -166,15 +166,21 @@ static inline int pgd_large(pgd_t pgd) {
 /*
  * Encode and de-code a swap entry
  *
- * |     ...     | 11| 10|  9|8|7|6|5| 4| 3|2|1|0| <- bit number
- * |     ...     |SW3|SW2|SW1|G|L|D|A|CD|WT|U|W|P| <- bit names
- * | OFFSET (14->63) | TYPE (9-13) |0|X|X|X| X| X|X|X|0| <- swp entry
+ * |     ...     | 11| 10|  9|8|7|6|5| 4| 3|2| 1|0| <- bit number
+ * |     ...     |SW3|SW2|SW1|G|L|D|A|CD|WT|U| W|P| <- bit names
+ * | OFFSET (14->63) | TYPE (9-13) |0|0|X|X| X| X|X|SD|0| <- swp entry
  *
  * G (8) is aliased and used as a PROT_NONE indicator for
  * !present ptes.  We need to start storing swap entries above
  * there.  We also need to avoid using A and D because of an
  * erratum where they can be incorrectly set by hardware on
  * non-present PTEs.
+ *
+ * SD (1) in swp entry is used to store soft dirty bit, which helps us
+ * remember soft dirty over page migration
+ *
+ * Bit 7 in swp entry should be 0 because pmd_present checks not only P,
+ * but also L and G.
  */
 #define SWP_TYPE_FIRST_BIT (_PAGE_BIT_PROTNONE + 1)
 #define SWP_TYPE_BITS 5

--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -70,15 +70,15 @@
 /*
  * Tracking soft dirty bit when a page goes to a swap is tricky.
  * We need a bit which can be stored in pte _and_ not conflict
- * with swap entry format. On x86 bits 6 and 7 are *not* involved
- * into swap entry computation, but bit 6 is used for nonlinear
- * file mapping, so we borrow bit 7 for soft dirty tracking.
+ * with swap entry format. On x86 bits 1-4 are *not* involved
+ * into swap entry computation, but bit 7 is used for thp migration,
+ * so we borrow bit 1 for soft dirty tracking.
  *
  * Please note that this bit must be treated as swap dirty page
- * mark if and only if the PTE has present bit clear!
+ * mark if and only if the PTE/PMD has present bit clear!
  */
 #ifdef CONFIG_MEM_SOFT_DIRTY
-#define _PAGE_SWP_SOFT_DIRTY	_PAGE_PSE
+#define _PAGE_SWP_SOFT_DIRTY	_PAGE_RW
 #else
 #define _PAGE_SWP_SOFT_DIRTY	(_AT(pteval_t, 0))
 #endif