Date: Wed, 30 Jan 2019 17:58:50 -0500
From: Jerome Glisse
To: Jason Gunthorpe
Cc: Logan Gunthorpe, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Greg Kroah-Hartman, Rafael J. Wysocki, Bjorn Helgaas, Christian Koenig, Felix Kuehling, linux-pci@vger.kernel.org, dri-devel@lists.freedesktop.org, Christoph Hellwig, Marek Szyprowski, Robin Murphy, Joerg Roedel, iommu@lists.linux-foundation.org
Subject: Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma
Message-ID: <20190130225849.GJ5061@redhat.com>
References: <20190130201114.GB17915@mellanox.com> <20190130204332.GF5061@redhat.com> <20190130204954.GI17080@mellanox.com> <20190130214525.GG5061@redhat.com> <20190130215600.GM17080@mellanox.com> <20190130223027.GH5061@redhat.com> <20190130223258.GB25486@mellanox.com> <20190130224705.GI5061@redhat.com> <20190130225148.GC25486@mellanox.com>
In-Reply-To: <20190130225148.GC25486@mellanox.com>

On Wed, Jan 30, 2019 at 10:51:55PM +0000, Jason Gunthorpe wrote:
> On Wed, Jan 30, 2019 at 05:47:05PM -0500, Jerome Glisse wrote:
> > On Wed, Jan 30, 2019 at 10:33:04PM +0000, Jason Gunthorpe wrote:
> > > On Wed, Jan 30, 2019 at 05:30:27PM -0500, Jerome Glisse wrote:
> > > >
> > > > > What is the problem in the HMM mirror that it needs this restriction?
> > > >
> > > > No restriction at all here. I think I just wasn't understood.
> > >
> > > Are you talking about the exporting side - where the thing
> > > creating the VMA can really only put one distinct object into it?
> >
> > The message I was trying to get across is that HMM mirror will
> > always succeed for everything* except for special vmas, i.e. mmaps
> > of a device file.
> > For those it can only succeed if a p2p_map() call
> > succeeds.
> >
> > So any user of HMM mirror might want to know why the mirroring
> > failed: was it because something exceptional happened, or because
> > it was trying to map a special vma, which can be forbidden?
> >
> > Hence why I assume that you might want to know about such a
> > p2p_map failure at the time you create the umem odp object, as it
> > might be a failure you want to report and handle differently. If
> > you do not care about differentiating OOM or exceptional failure
> > from p2p_map failure then you have nothing to worry about: you
> > will get the same error from HMM for both.
>
> I think my hope here was that we could have some kind of 'trial'
> interface where very early users can call
> 'hmm_mirror_is_maybe_supported(dev, user_ptr, len)' and get a
> failure indication.
>
> We probably wouldn't call this on the full address space though

Yes, we can add a special wrapper around the general case that allows
the caller to differentiate failures. At creation time you call the
special flavor and get a proper distinction between errors. Afterward,
during normal operation, any failure is treated the same way no matter
the reason (munmap, mremap, mprotect, ...).

> Beyond that it is just inevitable there can be problems faulting if
> the memory map is messed with after the MR is created.
>
> And here again, I don't want to worry about any particular VMA
> boundaries..

You do not have to worry about boundaries: HMM will return -EFAULT if
there is no valid vma behind the address you are trying to map (or if
the vma protection does not allow the access). You can then handle
that failure just as you do now, and my ODP HMM patch preserves that
behavior.

Cheers,
Jérôme