From: Souptick Joarder
Date: Mon, 1 Apr 2019 17:29:54 +0530
Subject: Re: [PATCH v2 11/11] mm/hmm: add an helper function that fault pages and map them to a device v2
To: jglisse@redhat.com
Cc: Linux-MM, linux-kernel@vger.kernel.org, Andrew Morton, Ralph Campbell, John Hubbard, Dan Williams
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Mar 25, 2019 at 8:11 PM wrote:
>
> From: Jérôme Glisse
>
> This is an all-in-one helper that faults pages in a range and maps them
> to a device, so that every single device driver does not have to
> re-implement this common pattern.
>
> This is taken from ODP RDMA in preparation for the ODP RDMA conversion.
> It will be used by nouveau and other drivers.
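(Aside for readers following the thread: the "common pattern" the commit message refers to — map every faulted page, and back out the partial mappings if any one of them fails — can be sketched in plain userspace C. All names below are illustrative stand-ins for the demo, not the kernel API.)

```c
#include <assert.h>
#include <stddef.h>

#define NPAGES  4
#define FAIL_AT 2   /* simulate a mapping failure at this index */

static int unmapped[NPAGES];

/* Stand-in for dma_map_page(): 0 means failure, anything else a handle. */
static unsigned long fake_map(size_t i)
{
        return i == FAIL_AT ? 0 : i + 1;
}

/* Stand-in for dma_unmap_page(). */
static void fake_unmap(size_t i)
{
        unmapped[i] = 1;
}

/*
 * Map n pages; on failure, unwind the mappings that already succeeded
 * and report an error, so the caller never sees a half-mapped range.
 */
static long map_range(unsigned long *daddrs, size_t n)
{
        size_t i;

        for (i = 0; i < n; ++i) {
                daddrs[i] = fake_map(i);
                if (!daddrs[i])
                        goto unwind;
        }
        return (long)n;         /* number of pages mapped */

unwind:
        while (i--)             /* pages 0..i-1 did get mapped */
                fake_unmap(i);
        return -1;
}
```

With FAIL_AT set to 2, a call maps pages 0 and 1, fails on page 2, unmaps 0 and 1 again, and returns -1 — the same shape as the unmap: label in the patch below.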
>
> Changes since v1:
>     - improved commit message
>
> Signed-off-by: Jérôme Glisse
> Cc: Andrew Morton
> Cc: Ralph Campbell
> Cc: John Hubbard
> Cc: Dan Williams
> ---
>  include/linux/hmm.h |   9 +++
>  mm/hmm.c            | 152 ++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 161 insertions(+)
>
> diff --git a/include/linux/hmm.h b/include/linux/hmm.h
> index 5f9deaeb9d77..7aadf18b29cb 100644
> --- a/include/linux/hmm.h
> +++ b/include/linux/hmm.h
> @@ -568,6 +568,15 @@ int hmm_range_register(struct hmm_range *range,
>  void hmm_range_unregister(struct hmm_range *range);
>  long hmm_range_snapshot(struct hmm_range *range);
>  long hmm_range_fault(struct hmm_range *range, bool block);
> +long hmm_range_dma_map(struct hmm_range *range,
> +                      struct device *device,
> +                      dma_addr_t *daddrs,
> +                      bool block);
> +long hmm_range_dma_unmap(struct hmm_range *range,
> +                        struct vm_area_struct *vma,
> +                        struct device *device,
> +                        dma_addr_t *daddrs,
> +                        bool dirty);
>
>  /*
>   * HMM_RANGE_DEFAULT_TIMEOUT - default timeout (ms) when waiting for a range
> diff --git a/mm/hmm.c b/mm/hmm.c
> index ce33151c6832..fd143251b157 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -30,6 +30,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>
> @@ -1163,6 +1164,157 @@ long hmm_range_fault(struct hmm_range *range, bool block)
>         return (hmm_vma_walk.last - range->start) >> PAGE_SHIFT;
>  }
>  EXPORT_SYMBOL(hmm_range_fault);
> +
> +/*

Adding an extra * (i.e. /** for kernel-doc) might be helpful here.

> + * hmm_range_dma_map() - hmm_range_fault() and dma map page all in one.
> + * @range: range being faulted
> + * @device: device against which to dma map pages
> + * @daddrs: dma address of mapped pages
> + * @block: allow blocking on fault (if true it sleeps and does not drop mmap_sem)
> + * Returns: number of pages mapped on success, -EAGAIN if mmap_sem has been
> + *          dropped and you need to try again, some other error value otherwise
> + *
> + * Note same usage pattern as hmm_range_fault().
> + */
> +long hmm_range_dma_map(struct hmm_range *range,
> +                      struct device *device,
> +                      dma_addr_t *daddrs,
> +                      bool block)
> +{
> +       unsigned long i, npages, mapped;
> +       long ret;
> +
> +       ret = hmm_range_fault(range, block);
> +       if (ret <= 0)
> +               return ret ? ret : -EBUSY;
> +
> +       npages = (range->end - range->start) >> PAGE_SHIFT;
> +       for (i = 0, mapped = 0; i < npages; ++i) {
> +               enum dma_data_direction dir = DMA_FROM_DEVICE;
> +               struct page *page;
> +
> +               /*
> +                * FIXME need to update DMA API to provide an invalid DMA
> +                * address value instead of a function to test the dma address
> +                * value. This would remove a lot of dumb code duplicated
> +                * across many archs.
> +                *
> +                * For now setting it to 0 here is good enough as the pfns[]
> +                * value is what is used to check what is valid and what isn't.
> +                */
> +               daddrs[i] = 0;
> +
> +               page = hmm_pfn_to_page(range, range->pfns[i]);
> +               if (page == NULL)
> +                       continue;
> +
> +               /* Check if range is being invalidated */
> +               if (!range->valid) {
> +                       ret = -EBUSY;
> +                       goto unmap;
> +               }
> +
> +               /* If it is read and write then map bi-directional.
> +                */
> +               if (range->pfns[i] & range->values[HMM_PFN_WRITE])
> +                       dir = DMA_BIDIRECTIONAL;
> +
> +               daddrs[i] = dma_map_page(device, page, 0, PAGE_SIZE, dir);
> +               if (dma_mapping_error(device, daddrs[i])) {
> +                       ret = -EFAULT;
> +                       goto unmap;
> +               }
> +
> +               mapped++;
> +       }
> +
> +       return mapped;
> +
> +unmap:
> +       for (npages = i, i = 0; (i < npages) && mapped; ++i) {
> +               enum dma_data_direction dir = DMA_FROM_DEVICE;
> +               struct page *page;
> +
> +               page = hmm_pfn_to_page(range, range->pfns[i]);
> +               if (page == NULL)
> +                       continue;
> +
> +               if (dma_mapping_error(device, daddrs[i]))
> +                       continue;
> +
> +               /* If it is read and write then it was mapped bi-directional. */
> +               if (range->pfns[i] & range->values[HMM_PFN_WRITE])
> +                       dir = DMA_BIDIRECTIONAL;
> +
> +               dma_unmap_page(device, daddrs[i], PAGE_SIZE, dir);
> +               mapped--;
> +       }
> +
> +       return ret;
> +}
> +EXPORT_SYMBOL(hmm_range_dma_map);
> +
> +/*

Same here.

> + * hmm_range_dma_unmap() - unmap a range that was mapped with hmm_range_dma_map()
> + * @range: range being unmapped
> + * @vma: the vma against which the range (optional)
> + * @device: device against which dma map was done
> + * @daddrs: dma address of mapped pages
> + * @dirty: dirty page if it had the write flag set
> + * Returns: number of pages unmapped on success, -EINVAL otherwise
> + *
> + * Note that the caller MUST abide by mmu notifiers or use HMM mirror and abide
> + * by the sync_cpu_device_pagetables() callback so that it is safe here to
> + * call set_page_dirty(). Caller must also take appropriate locks to avoid
> + * concurrent mmu notifiers or sync_cpu_device_pagetables() making progress.
> + */
> +long hmm_range_dma_unmap(struct hmm_range *range,
> +                        struct vm_area_struct *vma,
> +                        struct device *device,
> +                        dma_addr_t *daddrs,
> +                        bool dirty)
> +{
> +       unsigned long i, npages;
> +       long cpages = 0;
> +
> +       /* Sanity check.
> +        */
> +       if (range->end <= range->start)
> +               return -EINVAL;
> +       if (!daddrs)
> +               return -EINVAL;
> +       if (!range->pfns)
> +               return -EINVAL;
> +
> +       npages = (range->end - range->start) >> PAGE_SHIFT;
> +       for (i = 0; i < npages; ++i) {
> +               enum dma_data_direction dir = DMA_FROM_DEVICE;
> +               struct page *page;
> +
> +               page = hmm_pfn_to_page(range, range->pfns[i]);
> +               if (page == NULL)
> +                       continue;
> +
> +               /* If it is read and write then map bi-directional. */
> +               if (range->pfns[i] & range->values[HMM_PFN_WRITE]) {
> +                       dir = DMA_BIDIRECTIONAL;
> +
> +                       /*
> +                        * See comments in function description on why it is
> +                        * safe here to call set_page_dirty()
> +                        */
> +                       if (dirty)
> +                               set_page_dirty(page);
> +               }
> +
> +               /* Unmap and clear pfns/dma address */
> +               dma_unmap_page(device, daddrs[i], PAGE_SIZE, dir);
> +               range->pfns[i] = range->values[HMM_PFN_NONE];
> +               /* FIXME see comments in hmm_vma_dma_map() */
> +               daddrs[i] = 0;
> +               cpages++;
> +       }
> +
> +       return cpages;
> +}
> +EXPORT_SYMBOL(hmm_range_dma_unmap);
> #endif /* IS_ENABLED(CONFIG_HMM_MIRROR) */
>
> --
> 2.17.2
>
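One more note for the record: the per-page decision in the unmap path — skip holes, unmap read-only pages as DMA_FROM_DEVICE, unmap writable pages as DMA_BIDIRECTIONAL and dirty them on request — reduces to the following userspace sketch. Flag values and names here are made up for the demo and are not the kernel's.

```c
#include <assert.h>
#include <stddef.h>

enum dir { FROM_DEVICE, BIDIRECTIONAL };

#define PFN_VALID 0x1u
#define PFN_WRITE 0x2u

#define NPAGES 4

static int dirtied[NPAGES];

/*
 * Mirrors the per-page logic of the unmap loop: skip holes, pick the
 * unmap direction from the write flag, and dirty writable pages when
 * the caller asked for it. Returns the number of pages processed.
 */
static long unmap_range(const unsigned int *pfns, size_t n,
                        int dirty, enum dir *dirs)
{
        long cpages = 0;
        size_t i;

        for (i = 0; i < n; ++i) {
                if (!(pfns[i] & PFN_VALID))
                        continue;       /* hole: nothing was mapped */

                dirs[i] = FROM_DEVICE;
                if (pfns[i] & PFN_WRITE) {
                        dirs[i] = BIDIRECTIONAL;
                        if (dirty)
                                dirtied[i] = 1;
                }
                cpages++;
        }
        return cpages;
}
```

For a range of four entries where entry 2 is a hole and entries 1 and 3 are writable, a dirtying unmap processes three pages, dirties only the writable ones, and leaves the read-only entry in FROM_DEVICE direction.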