Date: Fri, 14 Apr 2017 16:00:32 -0600
From: Alex Williamson <alex.williamson@redhat.com>
To: Kirti Wankhede
Subject: Re: [PATCH v3] vfio/type1: Remove locked page accounting workqueue
Message-ID: <20170414160032.58c4c359@t450s.home>
In-Reply-To: <20170414145814.469ac17f@t450s.home>
References: <20170411192644.2131.68702.stgit@gimli.home>
 <59aebab7-9d69-5ec3-6402-fb463de08dc3@nvidia.com>
 <20170414145814.469ac17f@t450s.home>

On Fri, 14 Apr 2017 14:58:14 -0600
Alex Williamson <alex.williamson@redhat.com> wrote:

> On Sat, 15 Apr 2017 00:51:28 +0530
> Kirti Wankhede wrote:
>
> > On 4/12/2017 12:58 AM, Alex Williamson wrote:
> > > If the mmap_sem is contended then the vfio type1 IOMMU backend will
> > > defer locked page accounting updates to a workqueue task. This has
> > > a few problems, and depending on which side the user tries to play,
> > > they might be over-penalized for unmaps that haven't yet been
> > > accounted, or race the workqueue to enter more mappings than
> > > they're allowed. The original intent of this workqueue mechanism
> > > seems to be focused on reducing latency through the ioctl, but we
> > > cannot do so at the cost of correctness. Remove this workqueue
> > > mechanism and update the callers to allow for failure. We can also
> > > now recheck the limit under write lock to make sure we don't
> > > exceed it.
> > >
> > > vfio_pin_pages_remote() also now necessarily includes an unwind
> > > path, which we can jump to directly if the consecutive page pinning
> > > finds that we're exceeding the user's memory limits. This avoids
> > > the current lazy approach, which does accounting and mapping up to
> > > the fault, only to return an error on the next iteration to unwind
> > > the entire vfio_dma.
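For illustration, the recheck-under-write-lock rule described above can
be sketched in isolation.  The following is a userspace analogue with
invented names (struct acct, acct_charge(); a pthread rwlock stands in
for mmap_sem), a sketch of the pattern rather than the kernel code:

        #include <errno.h>
        #include <pthread.h>

        struct acct {
                pthread_rwlock_t lock;  /* stands in for mmap_sem */
                long locked_vm;         /* pages currently accounted */
                long limit;             /* RLIMIT_MEMLOCK in pages */
        };

        /* Negative npage (unaccounting) always succeeds; positive
         * npage is checked against the limit under the same write lock
         * that updates the counter, so no caller can race past the
         * limit.
         */
        static int acct_charge(struct acct *a, long npage)
        {
                int ret = 0;

                if (!npage)
                        return 0;

                pthread_rwlock_wrlock(&a->lock);
                if (npage < 0 || a->locked_vm + npage <= a->limit)
                        a->locked_vm += npage;
                else
                        ret = -ENOMEM;
                pthread_rwlock_unlock(&a->lock);

                return ret;
        }

The old workqueue scheme broke exactly this invariant: the check and
the update could happen at different times, on different sides of a
deferred work item.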
> > >
> > > Cc: stable@vger.kernel.org
> > > Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
> > > ---
> > >
> > > v3: Update for comments from Peter
> > >  - Use task_rlimit() exclusively
> > >  - Discuss vfio_pin_pages_remote() exit branch in commitlog
> > >
> > >  drivers/vfio/vfio_iommu_type1.c | 99 +++++++++++++++++----------------------
> > >  1 file changed, 44 insertions(+), 55 deletions(-)
> > >
> > > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > > index 32d2633092a3..176ebcc0ffa2 100644
> > > --- a/drivers/vfio/vfio_iommu_type1.c
> > > +++ b/drivers/vfio/vfio_iommu_type1.c
> > > @@ -246,69 +246,43 @@ static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
> > >          return ret;
> > >  }
> > >
> > > -struct vwork {
> > > -        struct mm_struct *mm;
> > > -        long npage;
> > > -        struct work_struct work;
> > > -};
> > > -
> > > -/* delayed decrement/increment for locked_vm */
> > > -static void vfio_lock_acct_bg(struct work_struct *work)
> > > -{
> > > -        struct vwork *vwork = container_of(work, struct vwork, work);
> > > -        struct mm_struct *mm;
> > > -
> > > -        mm = vwork->mm;
> > > -        down_write(&mm->mmap_sem);
> > > -        mm->locked_vm += vwork->npage;
> > > -        up_write(&mm->mmap_sem);
> > > -        mmput(mm);
> > > -        kfree(vwork);
> > > -}
> > > -
> > > -static void vfio_lock_acct(struct task_struct *task, long npage)
> > > +static int vfio_lock_acct(struct task_struct *task, long npage)
> > >  {
> > > -        struct vwork *vwork;
> > >          struct mm_struct *mm;
> > >          bool is_current;
> > > +        int ret;
> > >
> > >          if (!npage)
> > > -                return;
> > > +                return 0;
> > >
> > >          is_current = (task->mm == current->mm);
> > >
> > >          mm = is_current ? task->mm : get_task_mm(task);
> > >          if (!mm)
> > > -                return; /* process exited */
> > > +                return -ESRCH; /* process exited */
> > >
> > > -        if (down_write_trylock(&mm->mmap_sem)) {
> > > -                mm->locked_vm += npage;
> > > -                up_write(&mm->mmap_sem);
> > > -                if (!is_current)
> > > -                        mmput(mm);
> > > -                return;
> > > -        }
> > > +        ret = down_write_killable(&mm->mmap_sem);
> > > +        if (!ret) {
> > > +                if (npage < 0) {
> > > +                        mm->locked_vm += npage;
> > > +                } else {
> > > +                        unsigned long limit;
> > >
> > > -        if (is_current) {
> > > -                mm = get_task_mm(task);
> > > -                if (!mm)
> > > -                        return;
> > > +                        limit = task_rlimit(task, RLIMIT_MEMLOCK) >> PAGE_SHIFT;
> > > +
> > > +                        if (mm->locked_vm + npage <= limit)
> > > +                                mm->locked_vm += npage;
> > > +                        else
> > > +                                ret = -ENOMEM;
> > > +                }
> > > +
> >
> > Sorry if I'm late here on my review.
> >
> > There are rlimit checks before calling vfio_lock_acct() while pinning
> > pages. I agree this is checked holding locks, so this check is more
> > robust, but it still feels redundant. I think you can remove the
> > checks from vfio_pin_page_external() and vfio_pin_pages_remote().
>
> If we removed those pre-checks then a user/mdev vendor driver would be
> able to pin massive amounts of memory, potentially causing a DoS on
> the host (ex. trigger OOM), before we bother to test whether they
> really have permission to do so.

I think the redundancy is better. Perhaps _external is overly
redundant since it's only doing one-page increments, but _remote could
well run into the above scenario w/o the redundancy.
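To make that concrete, the division of labor between the unlocked
pre-check and the locked accounting looks roughly like this.  A
self-contained sketch with invented helpers (pin_one(), unpin_one(),
lock_acct()), not the actual code paths:

        #include <errno.h>
        #include <stdbool.h>

        static long locked_est, limit = 16;  /* toy limit, in pages */
        static bool lock_cap;                /* CAP_IPC_LOCK analogue */

        static int pin_one(long i) { (void)i; return 0; }
        static void unpin_one(long i) { (void)i; }
        /* authoritative check-and-account under the lock (see above) */
        static int lock_acct(long n) { (void)n; return 0; }

        static long pin_many(long npage)
        {
                long pinned = 0;
                int ret = 0;

                for (; pinned < npage; pinned++) {
                        /* cheap unlocked pre-check: bounds how much an
                         * over-limit user can pin before rejection */
                        if (!lock_cap && locked_est + 1 > limit) {
                                ret = -ENOMEM;
                                goto unwind;
                        }
                        ret = pin_one(pinned);
                        if (ret)
                                goto unwind;
                        locked_est++;
                }

                ret = lock_acct(pinned);  /* the check that counts */
                if (ret)
                        goto unwind;

                return pinned;

        unwind:
                while (pinned--) {
                        unpin_one(pinned);   /* drop everything pinned */
                        locked_est--;
                }
                return ret;
        }

The pre-check is allowed to be stale; it exists only to fail fast, and
lock_acct() has the final say.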
Thanks,

Alex

> > Also, while checking the limit, the !lock_cap check is not
> > considered here. Wouldn't that mean the code imposes the limit check
> > even without the lock capability?
>
> That's a bug!  Thanks,
>
> Alex
>
> > > +                up_write(&mm->mmap_sem);
> > >          }
> > >
> > > -        /*
> > > -         * Couldn't get mmap_sem lock, so must setup to update
> > > -         * mm->locked_vm later. If locked_vm were atomic, we
> > > -         * wouldn't need this silliness
> > > -         */
> > > -        vwork = kmalloc(sizeof(struct vwork), GFP_KERNEL);
> > > -        if (WARN_ON(!vwork)) {
> > > +        if (!is_current)
> > >                  mmput(mm);
> > > -                return;
> > > -        }
> > > -        INIT_WORK(&vwork->work, vfio_lock_acct_bg);
> > > -        vwork->mm = mm;
> > > -        vwork->npage = npage;
> > > -        schedule_work(&vwork->work);
> > > +
> > > +        return ret;
> > >  }
> > >
> > >  /*
> > > @@ -405,7 +379,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
> > >  static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
> > >                                    long npage, unsigned long *pfn_base)
> > >  {
> > > -        unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
> > > +        unsigned long pfn = 0, limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
> > >          bool lock_cap = capable(CAP_IPC_LOCK);
> > >          long ret, pinned = 0, lock_acct = 0;
> > >          bool rsvd;
> > > @@ -442,8 +416,6 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
> > >          /* Lock all the consecutive pages from pfn_base */
> > >          for (vaddr += PAGE_SIZE, iova += PAGE_SIZE; pinned < npage;
> > >               pinned++, vaddr += PAGE_SIZE, iova += PAGE_SIZE) {
> > > -                unsigned long pfn = 0;
> > > -
> > >                  ret = vaddr_get_pfn(current->mm, vaddr, dma->prot, &pfn);
> > >                  if (ret)
> > >                          break;
> > > @@ -460,14 +432,25 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
> > >                                  put_pfn(pfn, dma->prot);
> > >                                  pr_warn("%s: RLIMIT_MEMLOCK (%ld) exceeded\n",
> > >                                          __func__, limit << PAGE_SHIFT);
> > > -                                break;
> > > +                                ret = -ENOMEM;
> > > +                                goto unpin_out;
> > >                          }
> > >                          lock_acct++;
> > >                  }
> > >          }
> > >
> > >  out:
> > > -        vfio_lock_acct(current, lock_acct);
> > > +        ret = vfio_lock_acct(current, lock_acct);
> > > +
> > > +unpin_out:
> > > +        if (ret) {
> > > +                if (!rsvd) {
> > > +                        for (pfn = *pfn_base ; pinned ; pfn++, pinned--)
> > > +                                put_pfn(pfn, dma->prot);
> > > +                }
> > > +
> > > +                return ret;
> > > +        }
> > >
> > >          return pinned;
> > >  }
> > > @@ -522,8 +505,14 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
> > >                  goto pin_page_exit;
> > >          }
> > >
> > > -        if (!rsvd && do_accounting)
> > > -                vfio_lock_acct(dma->task, 1);
> > > +        if (!rsvd && do_accounting) {
> > > +                ret = vfio_lock_acct(dma->task, 1);
> > > +                if (ret) {
> > > +                        put_pfn(*pfn_base, dma->prot);
> > > +                        goto pin_page_exit;
> > > +                }
> > > +        }
> > > +
> > >          ret = 1;
> > >
> > >  pin_page_exit:
> >