From: "Aneesh Kumar K.V"
To: Jan Stancek, Mike Kravetz
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, hillf zj, dave hansen,
    kirill shutemov, mhocko@suse.cz, n-horiguchi@ah.jp.nec.com, iamjoonsoo kim
Subject: Re: [bug/regression] libhugetlbfs testsuite failures and OOMs eventually kill my system
Date: Mon, 17 Oct 2016 23:57:05 +0530
Message-Id: <87h98a96h2.fsf@linux.vnet.ibm.com>
In-Reply-To: <472921348.43188.1476715444366.JavaMail.zimbra@redhat.com>
References: <57FF7BB4.1070202@redhat.com>
 <277142fc-330d-76c7-1f03-a1c8ac0cf336@oracle.com>
 <58009BE2.5010805@redhat.com>
 <0c9e132e-694c-17cd-1890-66fcfd2e8a0d@oracle.com>
 <472921348.43188.1476715444366.JavaMail.zimbra@redhat.com>

Jan Stancek writes:

> Hi Mike,
>
> Reverting 67961f9db8c4 helps; I let the whole suite run for 100 iterations
> and there were no issues.
>
> I cut the reproducer down and removed the last mmap/write/munmap, as that
> is enough to reproduce the problem. Then I started adding some traces to
> the kernel and noticed that on ppc I get 3 faults, while on x86 I get
> only 2.
>
> The interesting one is the 2nd fault, i.e. the first write after mapping
> as PRIVATE. The following condition fails on ppc the first time:
>     if (likely(ptep && pte_same(huge_ptep_get(ptep), pte))) {
> but it is immediately followed by a fault that looks identical, and in
> that one it evaluates as true.

Ok, we miss the _PAGE_PTE in new_pte there.

	new_pte = make_huge_pte(vma, page, ((vma->vm_flags & VM_WRITE)
				&& (vma->vm_flags & VM_SHARED)));
	set_huge_pte_at(mm, address, ptep, new_pte);

	hugetlb_count_add(pages_per_huge_page(h), mm);
	if ((flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) {
		/* Optimization, do the COW without a second fault */
		ret = hugetlb_cow(mm, vma, address, ptep, new_pte, page, ptl);
	}

IMHO that new_pte usage is wrong, because it does not account for flags
that set_huge_pte_at() may add; for ppc64 we add _PAGE_PTE (see the sketch
below, after the traces).

> Same with alloc_huge_page(): on x86_64 it is called twice, on ppc three
> times. In the 2nd call vma_needs_reservation() returns 0, in the 3rd it
> returns 1.
>
> ---- ppc -> 2nd and 3rd fault ---
> mmap(MAP_PRIVATE)
> hugetlb_fault address: 3effff000000, flags: 55
> hugetlb_cow old_page: f0000000010fc000
> alloc_huge_page ret: f000000001100000
> hugetlb_cow ptep: c000000455b27cf8, pte_same: 0
> free_huge_page page: f000000001100000, restore_reserve: 1
> hugetlb_fault address: 3effff000000, flags: 55
> hugetlb_cow old_page: f0000000010fc000
> alloc_huge_page ret: f000000001100000
> hugetlb_cow ptep: c000000455b27cf8, pte_same: 1
>
> --- x86_64 -> 2nd fault ---
> mmap(MAP_PRIVATE)
> hugetlb_fault address: 7f71a4200000, flags: 55
> hugetlb_cow address 0x7f71a4200000, old_page: ffffea0008d20000
> alloc_huge_page ret: ffffea0008d38000
> hugetlb_cow ptep: ffff8802314c7908, pte_same: 1

But I guess we still have an issue with respecting reservations here. I
will look at _PAGE_PTE and see what best we can do w.r.t. hugetlb.

-aneesh
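
A minimal sketch of one way to avoid the pte_same() mismatch, based only on
the hugetlb_no_page() snippet quoted above (untested, an illustration of the
idea rather than the actual fix): re-read the PTE after set_huge_pte_at()
and hand that value to hugetlb_cow(), so the value compared later already
carries whatever bits the architecture added, e.g. _PAGE_PTE on ppc64. The
function names and the hugetlb_cow() arguments are taken from the snippet
above.

	new_pte = make_huge_pte(vma, page, ((vma->vm_flags & VM_WRITE)
				&& (vma->vm_flags & VM_SHARED)));
	set_huge_pte_at(mm, address, ptep, new_pte);

	hugetlb_count_add(pages_per_huge_page(h), mm);
	if ((flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) {
		/*
		 * Optimization, do the COW without a second fault.
		 * Use the value actually installed in the page table
		 * rather than the stack copy: set_huge_pte_at() may have
		 * added architecture bits (e.g. _PAGE_PTE on ppc64), and
		 * hugetlb_cow() later compares against huge_ptep_get(ptep)
		 * with pte_same().
		 */
		ret = hugetlb_cow(mm, vma, address, ptep,
				  huge_ptep_get(ptep), page, ptl);
	}

With something along those lines, pte_same() would succeed on the first COW
attempt on ppc64 as well, avoiding the extra fault/alloc_huge_page() cycle
visible in the ppc trace.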