From: Mike Kravetz <mike.kravetz@oracle.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Dave Hansen, Naoya Horiguchi, David Rientjes, Hugh Dickins,
	Davidlohr Bueso, Aneesh Kumar, Hillf Danton, Christoph Hellwig,
	Mike Kravetz
Subject: [RFC v4 PATCH 2/9] mm/hugetlb: expose hugetlb fault mutex for use by fallocate
Date: Thu, 11 Jun 2015 14:01:33 -0700
Message-Id: <1434056500-2434-3-git-send-email-mike.kravetz@oracle.com>
In-Reply-To: <1434056500-2434-1-git-send-email-mike.kravetz@oracle.com>
References: <1434056500-2434-1-git-send-email-mike.kravetz@oracle.com>
X-Mailer: git-send-email 2.1.0

hugetlb page faults are currently synchronized by the table of mutexes
(htlb_fault_mutex_table).  fallocate code will need to synchronize with
the page fault code when it allocates or deletes pages.  Expose
interfaces so that fallocate operations can be synchronized with page
faults.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
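Note (not part of the patch): a minimal sketch of how a caller such as
the hugetlbfs fallocate code later in this series might use the
interface exposed here.  hugetlbfs_fallocate_chunk() and its body are
hypothetical illustrations; only hugetlb_fault_mutex_shared_hash(),
hugetlb_fault_mutex_lock() and hugetlb_fault_mutex_unlock() come from
this patch.

	#include <linux/hugetlb.h>

	/*
	 * Hypothetical fallocate-side caller: exclude page faults on
	 * (mapping, idx) while a huge page is added or deleted there.
	 * fallocate only deals with shared mappings, so hashing on
	 * (mapping, idx) alone (NULL vma) produces the same key that
	 * hugetlb_fault() uses for VM_SHARED vmas.
	 */
	static void hugetlbfs_fallocate_chunk(struct address_space *mapping,
					      pgoff_t idx)
	{
		u32 hash = hugetlb_fault_mutex_shared_hash(mapping, idx);

		hugetlb_fault_mutex_lock(hash);
		/* ... add or remove the huge page at idx here ... */
		hugetlb_fault_mutex_unlock(hash);
	}

Because both sides index htlb_fault_mutex_table with the same hash, a
page fault and an fallocate operation on the same page of the same file
serialize on the same mutex.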
 include/linux/hugetlb.h | 10 ++++++++++
 mm/hugetlb.c            | 20 ++++++++++++++++----
 2 files changed, 26 insertions(+), 4 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 2050261..bbd072e 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -85,6 +85,16 @@ int dequeue_hwpoisoned_huge_page(struct page *page);
 bool isolate_huge_page(struct page *page, struct list_head *list);
 void putback_active_hugepage(struct page *page);
 void free_huge_page(struct page *page);
+u32 hugetlb_fault_mutex_shared_hash(struct address_space *mapping, pgoff_t idx);
+extern struct mutex *htlb_fault_mutex_table;
+static inline void hugetlb_fault_mutex_lock(u32 hash)
+{
+	mutex_lock(&htlb_fault_mutex_table[hash]);
+}
+static inline void hugetlb_fault_mutex_unlock(u32 hash)
+{
+	mutex_unlock(&htlb_fault_mutex_table[hash]);
+}
 
 #ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
 pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3fc2359..f617cb6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -64,7 +64,7 @@ DEFINE_SPINLOCK(hugetlb_lock);
  * prevent spurious OOMs when the hugepage pool is fully utilized.
  */
 static int num_fault_mutexes;
-static struct mutex *htlb_fault_mutex_table ____cacheline_aligned_in_smp;
+struct mutex *htlb_fault_mutex_table ____cacheline_aligned_in_smp;
 
 /* Forward declaration */
 static int hugetlb_acct_memory(struct hstate *h, long delta);
@@ -3324,7 +3324,8 @@ static u32 fault_mutex_hash(struct hstate *h, struct mm_struct *mm,
 	unsigned long key[2];
 	u32 hash;
 
-	if (vma->vm_flags & VM_SHARED) {
+	/* !vma implies this was called from hugetlbfs fallocate code */
+	if (!vma || vma->vm_flags & VM_SHARED) {
 		key[0] = (unsigned long) mapping;
 		key[1] = idx;
 	} else {
@@ -3350,6 +3351,17 @@ static u32 fault_mutex_hash(struct hstate *h, struct mm_struct *mm,
 }
 #endif
 
+/*
+ * Interface for use by hugetlbfs fallocate code.  Faults must be
+ * synchronized with page adds or deletes by fallocate.  fallocate
+ * only deals with shared mappings.  See also hugetlb_fault_mutex_lock
+ * and hugetlb_fault_mutex_unlock.
+ */
+u32 hugetlb_fault_mutex_shared_hash(struct address_space *mapping, pgoff_t idx)
+{
+	return fault_mutex_hash(NULL, NULL, NULL, mapping, idx, 0);
+}
+
 int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 			unsigned long address, unsigned int flags)
 {
@@ -3390,7 +3402,7 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * the same page in the page cache.
 	 */
 	hash = fault_mutex_hash(h, mm, vma, mapping, idx, address);
-	mutex_lock(&htlb_fault_mutex_table[hash]);
+	hugetlb_fault_mutex_lock(hash);
 
 	entry = huge_ptep_get(ptep);
 	if (huge_pte_none(entry)) {
@@ -3473,7 +3485,7 @@ out_ptl:
 		put_page(pagecache_page);
 	}
 out_mutex:
-	mutex_unlock(&htlb_fault_mutex_table[hash]);
+	hugetlb_fault_mutex_unlock(hash);
 	/*
 	 * Generally it's safe to hold refcount during waiting page lock. But
 	 * here we just wait to defer the next page fault to avoid busy loop and
-- 
2.1.0