From: Matthew Wilcox
To: linux-mm@kvack.org
Cc: Matthew Wilcox, linux-kernel@vger.kernel.org
Subject: [PATCH v5 1/4] s390: Use _refcount for pgtables
Date: Wed, 7 Mar 2018 05:44:40 -0800
Message-Id: <20180307134443.32646-2-willy@infradead.org>
In-Reply-To: <20180307134443.32646-1-willy@infradead.org>
References: <20180307134443.32646-1-willy@infradead.org>

From: Matthew Wilcox

s390 borrows the storage used for _mapcount in struct page in order to
account whether the bottom or top half is being used for 2kB page
tables.  I want to use that for something else, so use the top byte of
_refcount instead of the bottom byte of _mapcount.  _refcount may
temporarily be incremented by other CPUs that see a stale pointer to
this page in the page cache, but each CPU can only increment it by one,
and there are no systems with 2^24 CPUs today, so they will not change
the upper byte of _refcount.  We do have to be a little careful not to
lose any of their writes (as they will subsequently decrement the
counter).

Signed-off-by: Matthew Wilcox
Acked-by: Martin Schwidefsky
---
 arch/s390/mm/pgalloc.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)
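
As a rough standalone sketch of the scheme described above (plain C11,
not kernel code, and not part of the patch): the top byte of a 32-bit
atomic value holds the 2K-fragment mask while the low 24 bits stay
available as an ordinary reference count, and because the fragment bits
are updated with an atomic XOR rather than a plain store, concurrent
increments of the low bits are never lost.  The helper names
(frag_mask, claim_half, release_half, FRAG_SHIFT) are invented for the
illustration and do not exist in the kernel.

#include <stdatomic.h>
#include <stdio.h>

#define FRAG_SHIFT 24	/* fragment mask lives in byte 3, as in the patch */

/* Read which 2K halves are in use (bit 0 = first half, bit 1 = second). */
static unsigned int frag_mask(atomic_uint *refcount)
{
	return atomic_load(refcount) >> FRAG_SHIFT;
}

/* Claim half 'bit' by flipping its flag in the top byte. */
static void claim_half(atomic_uint *refcount, unsigned int bit)
{
	atomic_fetch_xor(refcount, 1u << (bit + FRAG_SHIFT));
}

/* Release half 'bit'; returns the fragment mask left after the flip. */
static unsigned int release_half(atomic_uint *refcount, unsigned int bit)
{
	unsigned int old = atomic_fetch_xor(refcount, 1u << (bit + FRAG_SHIFT));

	return (old ^ (1u << (bit + FRAG_SHIFT))) >> FRAG_SHIFT;
}

int main(void)
{
	atomic_uint refcount = 1;	/* the page starts with one reference */

	claim_half(&refcount, 0);	/* first 2K now holds a page table */
	atomic_fetch_add(&refcount, 1);	/* speculative reference, other CPU */
	claim_half(&refcount, 1);	/* second 2K claimed as well */

	/* prints "fragment mask 3, low bits 2": the increment was not lost */
	printf("fragment mask %x, low bits %u\n",
	       frag_mask(&refcount), atomic_load(&refcount) & 0xffffffu);

	release_half(&refcount, 0);
	if (release_half(&refcount, 1) == 0)
		puts("both 2K halves are free again");
	return 0;
}

In the patch itself the same idea shows up as atomic_xor_bits() on
values shifted up by 24 followed by "mask >>= 24", which leaves the low
24 bits untouched for ordinary reference counting; that is also why the
atomic_set() calls have to go away.
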
diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index cb364153c43c..412c5f48a8e7 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -189,14 +189,15 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 		if (!list_empty(&mm->context.pgtable_list)) {
 			page = list_first_entry(&mm->context.pgtable_list,
 						struct page, lru);
-			mask = atomic_read(&page->_mapcount);
+			mask = atomic_read(&page->_refcount) >> 24;
 			mask = (mask | (mask >> 4)) & 3;
 			if (mask != 3) {
 				table = (unsigned long *) page_to_phys(page);
 				bit = mask & 1;		/* =1 -> second 2K */
 				if (bit)
 					table += PTRS_PER_PTE;
-				atomic_xor_bits(&page->_mapcount, 1U << bit);
+				atomic_xor_bits(&page->_refcount,
+						1U << (bit + 24));
 				list_del(&page->lru);
 			}
 		}
@@ -217,12 +218,12 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 	table = (unsigned long *) page_to_phys(page);
 	if (mm_alloc_pgste(mm)) {
 		/* Return 4K page table with PGSTEs */
-		atomic_set(&page->_mapcount, 3);
+		atomic_xor_bits(&page->_refcount, 3 << 24);
 		memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	} else {
 		/* Return the first 2K fragment of the page */
-		atomic_set(&page->_mapcount, 1);
+		atomic_xor_bits(&page->_refcount, 1 << 24);
 		memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
 		spin_lock_bh(&mm->context.lock);
 		list_add(&page->lru, &mm->context.pgtable_list);
@@ -241,7 +242,8 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
 		/* Free 2K page table fragment of a 4K page */
 		bit = (__pa(table) & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t));
 		spin_lock_bh(&mm->context.lock);
-		mask = atomic_xor_bits(&page->_mapcount, 1U << bit);
+		mask = atomic_xor_bits(&page->_refcount, 1U << (bit + 24));
+		mask >>= 24;
 		if (mask & 3)
 			list_add(&page->lru, &mm->context.pgtable_list);
 		else
@@ -252,7 +254,6 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
 	}
 
 	pgtable_page_dtor(page);
-	atomic_set(&page->_mapcount, -1);
 	__free_page(page);
 }
 
@@ -273,7 +274,8 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 	}
 	bit = (__pa(table) & ~PAGE_MASK) / (PTRS_PER_PTE*sizeof(pte_t));
 	spin_lock_bh(&mm->context.lock);
-	mask = atomic_xor_bits(&page->_mapcount, 0x11U << bit);
+	mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
+	mask >>= 24;
 	if (mask & 3)
 		list_add_tail(&page->lru, &mm->context.pgtable_list);
 	else
@@ -295,12 +297,13 @@ static void __tlb_remove_table(void *_table)
 		break;
 	case 1:		/* lower 2K of a 4K page table */
 	case 2:		/* higher 2K of a 4K page table */
-		if (atomic_xor_bits(&page->_mapcount, mask << 4) != 0)
+		mask = atomic_xor_bits(&page->_refcount, mask << (4 + 24));
+		mask >>= 24;
+		if (mask != 0)
 			break;
 		/* fallthrough */
 	case 3:		/* 4K page table with pgstes */
 		pgtable_page_dtor(page);
-		atomic_set(&page->_mapcount, -1);
 		__free_page(page);
 		break;
 	}
-- 
2.16.1