From: Christoph Hellwig
To: Andrew Morton
Cc: Peter Zijlstra, Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
	Jani Nikula, Joonas Lahtinen, Tvrtko Ursulin, Chris Wilson,
	Matthew Auld, Rodrigo Vivi, Minchan Kim, Matthew Wilcox,
	Nitin Gupta, x86@kernel.org, xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Subject: [PATCH 06/11] drm/i915: use vmap in shmem_pin_map
Date: Thu, 24 Sep 2020 15:58:48 +0200
Message-Id: <20200924135853.875294-7-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200924135853.875294-1-hch@lst.de>
References: <20200924135853.875294-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

shmem_pin_map somewhat awkwardly reimplements vmap using alloc_vm_area
and manual pte setup.  The only practical difference is that
alloc_vm_area prefaults the vmalloc area PTEs, which doesn't seem to be
required here (and could be added to vmap using a flag if actually
required).

Switch to use vmap, and use vfree to free both the vmalloc mapping and
the page array, as well as dropping the references to each page.

Signed-off-by: Christoph Hellwig
---
 drivers/gpu/drm/i915/gt/shmem_utils.c | 76 +++++++--------------------
 1 file changed, 18 insertions(+), 58 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c
index 43c7acbdc79dea..f011ea42487e11 100644
--- a/drivers/gpu/drm/i915/gt/shmem_utils.c
+++ b/drivers/gpu/drm/i915/gt/shmem_utils.c
@@ -49,80 +49,40 @@ struct file *shmem_create_from_object(struct drm_i915_gem_object *obj)
 	return file;
 }
 
-static size_t shmem_npte(struct file *file)
-{
-	return file->f_mapping->host->i_size >> PAGE_SHIFT;
-}
-
-static void __shmem_unpin_map(struct file *file, void *ptr, size_t n_pte)
-{
-	unsigned long pfn;
-
-	vunmap(ptr);
-
-	for (pfn = 0; pfn < n_pte; pfn++) {
-		struct page *page;
-
-		page = shmem_read_mapping_page_gfp(file->f_mapping, pfn,
-						   GFP_KERNEL);
-		if (!WARN_ON(IS_ERR(page))) {
-			put_page(page);
-			put_page(page);
-		}
-	}
-}
-
 void *shmem_pin_map(struct file *file)
 {
-	const size_t n_pte = shmem_npte(file);
-	pte_t *stack[32], **ptes, **mem;
-	struct vm_struct *area;
-	unsigned long pfn;
-
-	mem = stack;
-	if (n_pte > ARRAY_SIZE(stack)) {
-		mem = kvmalloc_array(n_pte, sizeof(*mem), GFP_KERNEL);
-		if (!mem)
-			return NULL;
-	}
+	struct page **pages;
+	size_t n_pages, i;
+	void *vaddr;
 
-	area = alloc_vm_area(n_pte << PAGE_SHIFT, mem);
-	if (!area) {
-		if (mem != stack)
-			kvfree(mem);
+	n_pages = file->f_mapping->host->i_size >> PAGE_SHIFT;
+	pages = kvmalloc_array(n_pages, sizeof(*pages), GFP_KERNEL);
+	if (!pages)
 		return NULL;
-	}
 
-	ptes = mem;
-	for (pfn = 0; pfn < n_pte; pfn++) {
-		struct page *page;
-
-		page = shmem_read_mapping_page_gfp(file->f_mapping, pfn,
-						   GFP_KERNEL);
-		if (IS_ERR(page))
+	for (i = 0; i < n_pages; i++) {
+		pages[i] = shmem_read_mapping_page_gfp(file->f_mapping, i,
+						       GFP_KERNEL);
+		if (IS_ERR(pages[i]))
 			goto err_page;
-
-		**ptes++ = mk_pte(page, PAGE_KERNEL);
 	}
 
-	if (mem != stack)
-		kvfree(mem);
-
+	vaddr = vmap(pages, n_pages, VM_MAP_PUT_PAGES, PAGE_KERNEL);
+	if (!vaddr)
+		goto err_page;
 	mapping_set_unevictable(file->f_mapping);
-	return area->addr;
-
+	return vaddr;
 err_page:
-	if (mem != stack)
-		kvfree(mem);
-
-	__shmem_unpin_map(file, area->addr, pfn);
+	while (--i >= 0)
+		put_page(pages[i]);
+	kvfree(pages);
 	return NULL;
 }
 
 void shmem_unpin_map(struct file *file, void *ptr)
 {
 	mapping_clear_unevictable(file->f_mapping);
-	__shmem_unpin_map(file, ptr, shmem_npte(file));
+	vfree(ptr);
 }
 
 static int __shmem_rw(struct file *file, loff_t off,
-- 
2.28.0
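
For reference, a minimal sketch of the pattern the patch switches to:
map a page array with vmap() and the VM_MAP_PUT_PAGES flag (introduced
earlier in this series), and let a single vfree() later unmap the range,
drop the page references, and free the array. This is an illustrative
sketch, not part of the patch; the demo_* helper names are made up here.

/*
 * Sketch only: the vmap()/VM_MAP_PUT_PAGES/vfree() pattern used by
 * shmem_pin_map()/shmem_unpin_map() above.  Helper names are
 * hypothetical; the pages[] array is assumed to come from
 * kvmalloc_array() with one reference held per page.
 */
#include <linux/mm.h>
#include <linux/vmalloc.h>

static void *demo_map_pages(struct page **pages, unsigned int n_pages)
{
	/*
	 * VM_MAP_PUT_PAGES hands ownership of both the page references
	 * and the pages[] array itself over to the vmalloc area.
	 */
	return vmap(pages, n_pages, VM_MAP_PUT_PAGES, PAGE_KERNEL);
}

static void demo_unmap_pages(void *vaddr)
{
	/* Unmaps the range, puts each page, and frees the pages[] array. */
	vfree(vaddr);
}

Note that ownership only transfers once vmap() succeeds, which is why
the err_page path in the diff still has to put_page() the pages read so
far and kvfree() the array by hand.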