From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
	Daniel Almeida, Gert Wollny, Gustavo Padovan, Daniel Stone, Tomeu Vizoso,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
	Steven Price, Alyssa Rosenzweig, Rob Clark, Emil Velikov, Robin Murphy
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, Dmitry Osipenko, Dmitry Osipenko
Subject: [PATCH v4 10/15] drm/shmem-helper: Take reservation lock instead of drm_gem_shmem locks
Date: Mon, 18 Apr 2022 01:37:02 +0300
Message-Id: <20220417223707.157113-11-dmitry.osipenko@collabora.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220417223707.157113-1-dmitry.osipenko@collabora.com>
References: <20220417223707.157113-1-dmitry.osipenko@collabora.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Replace the drm_gem_shmem locks with the reservation lock to make GEM
locking more consistent.

Previously, drm_gem_shmem_vmap() and drm_gem_shmem_get_pages() were
protected by separate locks. Now they use the same lock, which makes
no difference for the current GEM SHMEM users.
Only the Panfrost and Lima drivers use vmap(), and they do it in slow
code paths, so there was no practical justification for using a
separate lock in vmap().

Suggested-by: Daniel Vetter
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c  | 38 ++++++++++++-------------
 drivers/gpu/drm/lima/lima_gem.c         |  8 +++---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 15 ++++++----
 include/drm/drm_gem_shmem_helper.h      | 10 -------
 4 files changed, 31 insertions(+), 40 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 30ee46348a99..3ecef571eff3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -86,8 +86,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 	if (ret)
 		goto err_release;
 
-	mutex_init(&shmem->pages_lock);
-	mutex_init(&shmem->vmap_lock);
 	INIT_LIST_HEAD(&shmem->madv_list);
 
 	if (!private) {
@@ -157,8 +155,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 	WARN_ON(shmem->pages_use_count);
 
 	drm_gem_object_release(obj);
-	mutex_destroy(&shmem->pages_lock);
-	mutex_destroy(&shmem->vmap_lock);
 	kfree(shmem);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
@@ -209,11 +205,11 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 
 	WARN_ON(shmem->base.import_attach);
 
-	ret = mutex_lock_interruptible(&shmem->pages_lock);
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ret;
 	ret = drm_gem_shmem_get_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
@@ -248,9 +244,9 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
  */
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 {
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_put_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 }
 EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 
@@ -310,7 +306,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
-		ret = drm_gem_shmem_get_pages(shmem);
+		ret = drm_gem_shmem_get_pages_locked(shmem);
 		if (ret)
 			goto err_zero_use;
 
@@ -360,11 +356,11 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 {
 	int ret;
 
-	ret = mutex_lock_interruptible(&shmem->vmap_lock);
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ret;
 	ret = drm_gem_shmem_vmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
@@ -385,7 +381,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 		dma_buf_vunmap(obj->import_attach->dmabuf, map);
 	} else {
 		vunmap(shmem->vaddr);
-		drm_gem_shmem_put_pages(shmem);
+		drm_gem_shmem_put_pages_locked(shmem);
 	}
 
 	shmem->vaddr = NULL;
@@ -406,9 +402,11 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
 			  struct iosys_map *map)
 {
-	mutex_lock(&shmem->vmap_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_vunmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
+	dma_resv_unlock(shmem->base.resv);
+
+	drm_gem_shmem_update_purgeable_status(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);
 
@@ -442,14 +440,14 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
  */
 int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
 {
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 
 	if (shmem->madv >= 0)
 		shmem->madv = madv;
 
 	madv = shmem->madv;
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return (madv >= 0);
 }
@@ -487,10 +485,10 @@ EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
 bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 {
-	if (!mutex_trylock(&shmem->pages_lock))
+	if (!dma_resv_trylock(shmem->base.resv))
 		return false;
 
 	drm_gem_shmem_purge_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return true;
 }
@@ -549,7 +547,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 
 	if (page_offset >= num_pages ||
 	    WARN_ON_ONCE(!shmem->pages) ||
@@ -561,7 +559,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
 	}
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 0f1ca0b0db49..5008f0c2428f 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -34,7 +34,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 
 	new_size = min(new_size, bo->base.base.size);
 
-	mutex_lock(&bo->base.pages_lock);
+	dma_resv_lock(bo->base.base.resv, NULL);
 
 	if (bo->base.pages) {
 		pages = bo->base.pages;
@@ -42,7 +42,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
 				       sizeof(*pages), GFP_KERNEL | __GFP_ZERO);
 		if (!pages) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return -ENOMEM;
 		}
 
@@ -56,13 +56,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		struct page *page = shmem_read_mapping_page(mapping, i);
 
 		if (IS_ERR(page)) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return PTR_ERR(page);
 		}
 		pages[i] = page;
 	}
 
-	mutex_unlock(&bo->base.pages_lock);
+	dma_resv_unlock(bo->base.base.resv);
 
 	ret = sg_alloc_table_from_pages(&sgt, pages, i, 0,
 					new_size, GFP_KERNEL);
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index d3f82b26a631..404b8f67e2df 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -424,6 +424,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	struct panfrost_gem_mapping *bomapping;
 	struct panfrost_gem_object *bo;
 	struct address_space *mapping;
+	struct drm_gem_object *obj;
 	pgoff_t page_offset;
 	struct sg_table *sgt;
 	struct page **pages;
@@ -446,13 +447,15 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	page_offset = addr >> PAGE_SHIFT;
 	page_offset -= bomapping->mmnode.start;
 
-	mutex_lock(&bo->base.pages_lock);
+	obj = &bo->base.base;
+
+	dma_resv_lock(obj->resv, NULL);
 
 	if (!bo->base.pages) {
 		bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M,
 				     sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO);
 		if (!bo->sgts) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(obj->resv);
 			ret = -ENOMEM;
 			goto err_bo;
 		}
@@ -462,7 +465,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		if (!pages) {
 			kvfree(bo->sgts);
 			bo->sgts = NULL;
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(obj->resv);
 			ret = -ENOMEM;
 			goto err_bo;
 		}
@@ -472,7 +475,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		pages = bo->base.pages;
 		if (pages[page_offset]) {
 			/* Pages are already mapped, bail out. */
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(obj->resv);
 			goto out;
 		}
 	}
@@ -483,13 +486,13 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
 		pages[i] = shmem_read_mapping_page(mapping, i);
 		if (IS_ERR(pages[i])) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(obj->resv);
 			ret = PTR_ERR(pages[i]);
 			goto err_pages;
 		}
 	}
 
-	mutex_unlock(&bo->base.pages_lock);
+	dma_resv_unlock(obj->resv);
 
 	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
 	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index d0a57853c188..70889533962a 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -26,11 +26,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct drm_gem_object base;
 
-	/**
-	 * @pages_lock: Protects the page table and use count
-	 */
-	struct mutex pages_lock;
-
 	/**
	 * @pages: Page table
 	 */
@@ -79,11 +74,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct sg_table *sgt;
 
-	/**
-	 * @vmap_lock: Protects the vmap address and use count
-	 */
-	struct mutex vmap_lock;
-
 	/**
 	 * @vaddr: Kernel virtual address of the backing memory
 	 */
-- 
2.35.1
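
[Editor's note] The conversion above is mechanical: every site that previously took one of the two per-object mutexes now takes the GEM object's reservation (dma-resv) lock via shmem->base.resv. Below is a minimal sketch of the resulting pattern; the helper names my_driver_touch_pages() and my_driver_try_purge() are hypothetical and not part of this patch, while the dma_resv_*() calls are the real kernel API used by the patch.

/* Illustrative sketch only, not part of the patch. */
#include <linux/dma-resv.h>
#include <drm/drm_gem_shmem_helper.h>

static int my_driver_touch_pages(struct drm_gem_shmem_object *shmem)
{
	int ret;

	/*
	 * Sleeping, interruptible acquisition of the object's
	 * reservation lock; this is what replaces
	 * mutex_lock_interruptible(&shmem->pages_lock).
	 */
	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
	if (ret)
		return ret;

	/* shmem->pages and its use count are stable while the lock is held. */

	dma_resv_unlock(shmem->base.resv);
	return 0;
}

static bool my_driver_try_purge(struct drm_gem_shmem_object *shmem)
{
	/*
	 * Non-blocking variant for reclaim-like paths, mirroring the
	 * dma_resv_trylock() use in drm_gem_shmem_purge() above.
	 */
	if (!dma_resv_trylock(shmem->base.resv))
		return false;

	dma_resv_unlock(shmem->base.resv);
	return true;
}

A practical consequence of this pattern: dma-resv is a ww_mutex underneath, so a driver that already holds the reservation lock around command submission takes the same lock the shmem helpers use, which is the consistency the commit message argues for.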