From: Davidlohr Bueso
To: akpm@linux-foundation.org
Cc: dledford@redhat.com, jgg@mellanox.com, jack@suse.de, ira.weiny@intel.com, linux-rdma@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, dave@stgolabs.net, Davidlohr Bueso
Subject: [PATCH 6/6] drivers/IB,core: reduce scope of mmap_sem
Date: Mon, 21 Jan 2019 09:42:20 -0800
Message-Id: <20190121174220.10583-7-dave@stgolabs.net>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20190121174220.10583-1-dave@stgolabs.net>
References: <20190121174220.10583-1-dave@stgolabs.net>

ib_umem_get() uses gup_longterm() and relies on the lock to stabilize the vma_list, so we cannot really get rid of mmap_sem altogether, but now that the counter is atomic, we can get rid of some of the complexity that mmap_sem brings with only pinned_vm.
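The accounting scheme this patch switches to can be sketched in userspace C11 atomics. This is an illustrative stand-in only, not the kernel code: `pinned_vm` models `mm->pinned_vm`, `pin_account()` models the new ib_umem_get() path, and the limit argument plays the role of the RLIMIT_MEMLOCK check. The key point is the optimistic add-then-undo: add first with a single atomic RMW, and subtract back if the result exceeds the limit, with no lock held.

```c
/* Sketch of the optimistic pinned-page accounting pattern (assumed names;
 * pinned_vm stands in for mm->pinned_vm, pin_account() for the patched
 * ib_umem_get() path). */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

static atomic_uint_fast64_t pinned_vm; /* models mm->pinned_vm */

/* Account npages against lock_limit. Mirrors the patch: add first via a
 * single atomic RMW, then back the addition out if the limit is exceeded
 * (like atomic64_add_return() followed by atomic64_sub() on failure). */
static bool pin_account(uint64_t npages, uint64_t lock_limit)
{
    uint64_t new_pinned = atomic_fetch_add(&pinned_vm, npages) + npages;

    if (new_pinned > lock_limit) {
        atomic_fetch_sub(&pinned_vm, npages); /* undo, nothing was pinned */
        return false;
    }
    return true;
}

/* Release accounting, as in ib_umem_release() after the patch. */
static void unpin_account(uint64_t npages)
{
    atomic_fetch_sub(&pinned_vm, npages);
}
```

Because the add and the limit check are no longer atomic as a pair, two racing callers can transiently push the counter past the limit before one of them backs out; the patch accepts that small over-accounting window in exchange for dropping mmap_sem from the fast path entirely.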
Reviewed-by: Ira Weiny
Signed-off-by: Davidlohr Bueso
---
 drivers/infiniband/core/umem.c | 41 ++---------------------------------------
 1 file changed, 2 insertions(+), 39 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 678abe1afcba..b69d3efa8712 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -165,15 +165,12 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
 
 	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
 
-	down_write(&mm->mmap_sem);
-	new_pinned = atomic64_read(&mm->pinned_vm) + npages;
+	new_pinned = atomic64_add_return(npages, &mm->pinned_vm);
 	if (new_pinned > lock_limit && !capable(CAP_IPC_LOCK)) {
-		up_write(&mm->mmap_sem);
+		atomic64_sub(npages, &mm->pinned_vm);
 		ret = -ENOMEM;
 		goto out;
 	}
-	atomic64_set(&mm->pinned_vm, new_pinned);
-	up_write(&mm->mmap_sem);
 
 	cur_base = addr & PAGE_MASK;
 
@@ -233,9 +230,7 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
 umem_release:
 	__ib_umem_release(context->device, umem, 0);
 vma:
-	down_write(&mm->mmap_sem);
 	atomic64_sub(ib_umem_num_pages(umem), &mm->pinned_vm);
-	up_write(&mm->mmap_sem);
 out:
 	if (vma_list)
 		free_page((unsigned long) vma_list);
@@ -258,25 +253,12 @@ static void __ib_umem_release_tail(struct ib_umem *umem)
 	kfree(umem);
 }
 
-static void ib_umem_release_defer(struct work_struct *work)
-{
-	struct ib_umem *umem = container_of(work, struct ib_umem, work);
-
-	down_write(&umem->owning_mm->mmap_sem);
-	atomic64_sub(ib_umem_num_pages(umem), &umem->owning_mm->pinned_vm);
-	up_write(&umem->owning_mm->mmap_sem);
-
-	__ib_umem_release_tail(umem);
-}
-
 /**
  * ib_umem_release - release memory pinned with ib_umem_get
  * @umem: umem struct to release
  */
 void ib_umem_release(struct ib_umem *umem)
 {
-	struct ib_ucontext *context = umem->context;
-
 	if (umem->is_odp) {
 		ib_umem_odp_release(to_ib_umem_odp(umem));
 		__ib_umem_release_tail(umem);
@@ -285,26 +267,7 @@ void ib_umem_release(struct ib_umem *umem)
 
 	__ib_umem_release(umem->context->device, umem, 1);
 
-	/*
-	 * We may be called with the mm's mmap_sem already held. This
-	 * can happen when a userspace munmap() is the call that drops
-	 * the last reference to our file and calls our release
-	 * method. If there are memory regions to destroy, we'll end
-	 * up here and not be able to take the mmap_sem. In that case
-	 * we defer the vm_locked accounting a workqueue.
-	 */
-	if (context->closing) {
-		if (!down_write_trylock(&umem->owning_mm->mmap_sem)) {
-			INIT_WORK(&umem->work, ib_umem_release_defer);
-			queue_work(ib_wq, &umem->work);
-			return;
-		}
-	} else {
-		down_write(&umem->owning_mm->mmap_sem);
-	}
 	atomic64_sub(ib_umem_num_pages(umem), &umem->owning_mm->pinned_vm);
-	up_write(&umem->owning_mm->mmap_sem);
-
 	__ib_umem_release_tail(umem);
 }
 EXPORT_SYMBOL(ib_umem_release);
-- 
2.16.4