From: Andreas Gruenbacher
To: Linus Torvalds, Catalin Marinas
Cc: Alexander Viro, Christoph Hellwig, "Darrick J. Wong", Paul Mackerras,
    Jan Kara, Matthew Wilcox, cluster-devel@redhat.com,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    ocfs2-devel@oss.oracle.com, kvm-ppc@vger.kernel.org,
    linux-btrfs@vger.kernel.org, Andreas Gruenbacher
Subject: [PATCH v8 15/17] gup: Introduce FOLL_NOFAULT flag to disable page faults
Date: Tue, 19 Oct 2021 15:42:02 +0200
Message-Id: <20211019134204.3382645-16-agruenba@redhat.com>
In-Reply-To: <20211019134204.3382645-1-agruenba@redhat.com>
References: <20211019134204.3382645-1-agruenba@redhat.com>

Introduce a new FOLL_NOFAULT flag that causes get_user_pages to return
-EFAULT when it would otherwise trigger a page fault.  This is roughly
similar to FOLL_FAST_ONLY but available on all architectures, and less
fragile.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
---
 include/linux/mm.h | 3 ++-
 mm/gup.c           | 4 +++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 73a52aba448f..2f0e6b9f8f3b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2851,7 +2851,8 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 #define FOLL_FORCE	0x10	/* get_user_pages read/write w/o permission */
 #define FOLL_NOWAIT	0x20	/* if a disk transfer is needed, start the IO
 				 * and return without waiting upon it */
-#define FOLL_POPULATE	0x40	/* fault in page */
+#define FOLL_POPULATE	0x40	/* fault in pages (with FOLL_MLOCK) */
+#define FOLL_NOFAULT	0x80	/* do not fault in pages */
 #define FOLL_HWPOISON	0x100	/* check page is hwpoisoned */
 #define FOLL_NUMA	0x200	/* force NUMA hinting page fault */
 #define FOLL_MIGRATION	0x400	/* wait for page to replace migration entry */
diff --git a/mm/gup.c b/mm/gup.c
index 614b8536b3b6..6ec8f5494424 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -918,6 +918,8 @@ static int faultin_page(struct vm_area_struct *vma,
 	/* mlock all present pages, but do not fault in new pages */
 	if ((*flags & (FOLL_POPULATE | FOLL_MLOCK)) == FOLL_MLOCK)
 		return -ENOENT;
+	if (*flags & FOLL_NOFAULT)
+		return -EFAULT;
 	if (*flags & FOLL_WRITE)
 		fault_flags |= FAULT_FLAG_WRITE;
 	if (*flags & FOLL_REMOTE)
@@ -2843,7 +2845,7 @@ static int internal_get_user_pages_fast(unsigned long start,
 
 	if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM |
 				       FOLL_FORCE | FOLL_PIN | FOLL_GET |
-				       FOLL_FAST_ONLY)))
+				       FOLL_FAST_ONLY | FOLL_NOFAULT)))
 		return -EINVAL;
 
 	if (gup_flags & FOLL_PIN)
-- 
2.26.3
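
For illustration, the intended calling pattern looks roughly like the sketch
below (not part of this patch): pin the pages with FOLL_NOFAULT while holding
a lock that the fault path might also need, drop the lock when that fails,
fault the pages in, and retry.  The function and lock names here are made up
for the example; fault_in_writeable() refers to the helper introduced earlier
in this series.

#include <linux/mm.h>
#include <linux/mutex.h>
#include <linux/pagemap.h>

/* Sketch only: pin one writable user page while holding @lock. */
static int pin_page_under_lock(struct mutex *lock, unsigned long uaddr,
			       struct page **page)
{
	int ret;

retry:
	mutex_lock(lock);
	/* FOLL_NOFAULT keeps gup out of the fault path while the lock is held. */
	ret = get_user_pages_fast(uaddr, 1, FOLL_WRITE | FOLL_NOFAULT, page);
	if (ret == 1)
		return 0;	/* page pinned; caller unlocks and puts the page */
	mutex_unlock(lock);
	if (ret < 0 && ret != -EFAULT)
		return ret;
	/* Fault the page in without the lock held, then try again. */
	if (fault_in_writeable((char __user *)uaddr, PAGE_SIZE))
		return -EFAULT;
	goto retry;
}

A real caller would also need to bound the retries or track partial progress
rather than looping unconditionally, along the lines of what the iomap and
gfs2 changes later in this series do.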