From: jglisse@redhat.com
To: linux-mm@kvack.org
Cc: Andrew Morton, linux-kernel@vger.kernel.org, Jérôme Glisse,
    Aneesh Kumar K.V, Zi Yan, Michal Hocko, Ralph Campbell,
    John Hubbard
Subject: [PATCH 4/7] mm/hmm: properly handle migration pmd v2
Date: Wed, 29 Aug 2018 13:17:49 -0400
Message-Id: <20180829171749.9365-1-jglisse@redhat.com>
In-Reply-To: <20180824192549.30844-5-jglisse@redhat.com>

From: Jérôme Glisse <jglisse@redhat.com>

Before this patch a migration pmd entry (!pmd_present()) would have been
treated as a bad entry (pmd_bad() returns true on a migration pmd entry).
The outcome was that the device driver would believe the range covered by
the pmd was bad and would either SIGBUS or simply kill all the device's
threads (each device driver decides how to react when the device tries to
access a poisonous or invalid range of memory).

This patch explicitly handles the case of migration pmd entries, which are
non-present pmd entries, and either waits for the migration to finish or
reports an empty range (when the device is just trying to pre-fill a range
of virtual addresses and thus does not want to wait or trigger a page
fault).

Changed since v1:
    - use is_pmd_migration_entry() instead of open coding the equivalent.
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: Jérôme Glisse
Cc: Zi Yan
Cc: Michal Hocko
Cc: Ralph Campbell
Cc: John Hubbard
Cc: Andrew Morton
---
 mm/hmm.c | 42 ++++++++++++++++++++++++++++++++++++------
 1 file changed, 36 insertions(+), 6 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index a16678d08127..fd3d19d98070 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -577,22 +577,44 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 {
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
 	struct hmm_range *range = hmm_vma_walk->range;
+	struct vm_area_struct *vma = walk->vma;
 	uint64_t *pfns = range->pfns;
 	unsigned long addr = start, i;
 	pte_t *ptep;
+	pmd_t pmd;
 
-	i = (addr - range->start) >> PAGE_SHIFT;
 
 again:
-	if (pmd_none(*pmdp))
+	pmd = READ_ONCE(*pmdp);
+	if (pmd_none(pmd))
 		return hmm_vma_walk_hole(start, end, walk);
 
-	if (pmd_huge(*pmdp) && (range->vma->vm_flags & VM_HUGETLB))
+	if (pmd_huge(pmd) && (range->vma->vm_flags & VM_HUGETLB))
 		return hmm_pfns_bad(start, end, walk);
 
-	if (pmd_devmap(*pmdp) || pmd_trans_huge(*pmdp)) {
-		pmd_t pmd;
+	if (is_pmd_migration_entry(pmd)) {
+		swp_entry_t entry = pmd_to_swp_entry(pmd);
+
+		bool fault, write_fault;
+		unsigned long npages;
+		uint64_t *pfns;
+
+		i = (addr - range->start) >> PAGE_SHIFT;
+		npages = (end - addr) >> PAGE_SHIFT;
+		pfns = &range->pfns[i];
+		hmm_range_need_fault(hmm_vma_walk, pfns, npages,
+				     0, &fault, &write_fault);
+		if (fault || write_fault) {
+			hmm_vma_walk->last = addr;
+			pmd_migration_entry_wait(vma->vm_mm, pmdp);
+			return -EAGAIN;
+		}
+		return 0;
+	} else if (!pmd_present(pmd))
+		return hmm_pfns_bad(start, end, walk);
+
+	if (pmd_devmap(pmd) || pmd_trans_huge(pmd)) {
 		/*
 		 * No need to take pmd_lock here, even if some other threads
 		 * is splitting the huge pmd we will get that event through
@@ -607,13 +629,21 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 		if (!pmd_devmap(pmd) && !pmd_trans_huge(pmd))
 			goto again;
 
+		i = (addr - range->start) >> PAGE_SHIFT;
 		return hmm_vma_handle_pmd(walk, addr, end, &pfns[i], pmd);
 	}
 
-	if (pmd_bad(*pmdp))
+	/*
+	 * We have handled all the valid case above ie either none, migration,
+	 * huge or transparent huge. At this point either it is a valid pmd
+	 * entry pointing to pte directory or it is a bad pmd that will not
+	 * recover.
+	 */
+	if (pmd_bad(pmd))
 		return hmm_pfns_bad(start, end, walk);
 
 	ptep = pte_offset_map(pmdp, addr);
+	i = (addr - range->start) >> PAGE_SHIFT;
 	for (; addr < end; addr += PAGE_SIZE, ptep++, i++) {
 		int r;
-- 
2.17.1