From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton
Cc: Peter Zijlstra, Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
    Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Minchan Kim, Nitin Gupta,
    x86@kernel.org, xen-devel@lists.xenproject.org,
    linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Subject: [PATCH 3/6] drm/i915: use vmap in shmem_pin_map
Date: Fri, 18 Sep 2020 18:37:21 +0200
Message-Id: <20200918163724.2511-4-hch@lst.de>
In-Reply-To: <20200918163724.2511-1-hch@lst.de>
References: <20200918163724.2511-1-hch@lst.de>
X-Mailing-List: linux-kernel@vger.kernel.org

shmem_pin_map somewhat awkwardly reimplements vmap using alloc_vm_area
and manual pte setup.  The only practical difference is that
alloc_vm_area prefaults the vmalloc area PTEs, which doesn't seem to be
required here (and could be added to vmap using a flag if actually
required).
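[Editor's note: a userspace C sketch, not kernel code, of the shape the
converted shmem_pin_map() keeps: collect everything into one pointer
array, using a small on-stack array with a heap fallback (mirroring the
"stack[32]" + kvmalloc_array pattern in the patch), then hand the whole
array to a single call, which in the kernel is vmap(). collect_sum and
the long stand-in values are illustrative, not kernel API.]

```c
#include <stdlib.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/*
 * Sketch of the stack-array-with-heap-fallback pattern: small inputs
 * use the on-stack array, large ones fall back to the heap, and the
 * filled array is consumed in one batch (vmap() in the kernel).
 */
static long collect_sum(size_t n)
{
	long stack[32];
	long *vals = stack;
	long sum = 0;
	size_t i;

	if (n > ARRAY_SIZE(stack)) {
		vals = calloc(n, sizeof(*vals));
		if (!vals)
			return -1;
	}

	for (i = 0; i < n; i++)
		vals[i] = (long)i + 1;	/* stand-in for collecting page i */

	/* in the kernel: vaddr = vmap(pages, n_pages, 0, PAGE_KERNEL); */
	for (i = 0; i < n; i++)
		sum += vals[i];

	if (vals != stack)
		free(vals);
	return sum;
}
```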
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/gpu/drm/i915/gt/shmem_utils.c | 90 +++++++++++----------------
 1 file changed, 38 insertions(+), 52 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c
index 43c7acbdc79dea..77410091597f19 100644
--- a/drivers/gpu/drm/i915/gt/shmem_utils.c
+++ b/drivers/gpu/drm/i915/gt/shmem_utils.c
@@ -49,80 +49,66 @@ struct file *shmem_create_from_object(struct drm_i915_gem_object *obj)
 	return file;
 }
 
-static size_t shmem_npte(struct file *file)
+static size_t shmem_npages(struct file *file)
 {
 	return file->f_mapping->host->i_size >> PAGE_SHIFT;
 }
 
-static void __shmem_unpin_map(struct file *file, void *ptr, size_t n_pte)
-{
-	unsigned long pfn;
-
-	vunmap(ptr);
-
-	for (pfn = 0; pfn < n_pte; pfn++) {
-		struct page *page;
-
-		page = shmem_read_mapping_page_gfp(file->f_mapping, pfn,
-						   GFP_KERNEL);
-		if (!WARN_ON(IS_ERR(page))) {
-			put_page(page);
-			put_page(page);
-		}
-	}
-}
-
 void *shmem_pin_map(struct file *file)
 {
-	const size_t n_pte = shmem_npte(file);
-	pte_t *stack[32], **ptes, **mem;
-	struct vm_struct *area;
-	unsigned long pfn;
-
-	mem = stack;
-	if (n_pte > ARRAY_SIZE(stack)) {
-		mem = kvmalloc_array(n_pte, sizeof(*mem), GFP_KERNEL);
-		if (!mem)
+	const size_t n_pages = shmem_npages(file);
+	struct page **pages, *stack[32];
+	void *vaddr;
+	long i;
+
+	pages = stack;
+	if (n_pages > ARRAY_SIZE(stack)) {
+		pages = kvmalloc_array(n_pages, sizeof(*pages), GFP_KERNEL);
+		if (!pages)
 			return NULL;
 	}
 
-	area = alloc_vm_area(n_pte << PAGE_SHIFT, mem);
-	if (!area) {
-		if (mem != stack)
-			kvfree(mem);
-		return NULL;
-	}
-
-	ptes = mem;
-	for (pfn = 0; pfn < n_pte; pfn++) {
-		struct page *page;
-
-		page = shmem_read_mapping_page_gfp(file->f_mapping, pfn,
-						   GFP_KERNEL);
-		if (IS_ERR(page))
+	for (i = 0; i < n_pages; i++) {
+		pages[i] = shmem_read_mapping_page_gfp(file->f_mapping, i,
+						       GFP_KERNEL);
+		if (IS_ERR(pages[i]))
 			goto err_page;
-
-		**ptes++ = mk_pte(page, PAGE_KERNEL);
 	}
 
-	if (mem != stack)
-		kvfree(mem);
+	vaddr = vmap(pages, n_pages, 0, PAGE_KERNEL);
+	if (!vaddr)
+		goto err_page;
+	if (pages != stack)
+		kvfree(pages);
 	mapping_set_unevictable(file->f_mapping);
-	return area->addr;
+	return vaddr;
 err_page:
-	if (mem != stack)
-		kvfree(mem);
-
-	__shmem_unpin_map(file, area->addr, pfn);
+	while (--i >= 0)
+		put_page(pages[i]);
+	if (pages != stack)
+		kvfree(pages);
 	return NULL;
 }
 
 void shmem_unpin_map(struct file *file, void *ptr)
 {
+	long i = shmem_npages(file);
+
 	mapping_clear_unevictable(file->f_mapping);
-	__shmem_unpin_map(file, ptr, shmem_npte(file));
+	vunmap(ptr);
+
+	for (i = 0; i < shmem_npages(file); i++) {
+		struct page *page;
+
+		page = shmem_read_mapping_page_gfp(file->f_mapping, i,
+						   GFP_KERNEL);
+		if (!WARN_ON(IS_ERR(page))) {
+			put_page(page);
+			put_page(page);
+		}
+	}
 }
 
 static int __shmem_rw(struct file *file, loff_t off,
-- 
2.28.0
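
[Editor's note: a userspace C sketch, not kernel code, of the error-path
unwinding this patch introduces at err_page in shmem_pin_map(): acquire
resources 0..n-1, and on failure at index i release exactly the ones
already acquired with "while (--i >= 0)". pin_all and fail_at are
illustrative stand-ins; malloc(1) stands in for pinning one page.]

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Acquire n resources; on failure part-way through, unwind only the
 * entries that were actually acquired, mirroring the
 * "while (--i >= 0) put_page(pages[i]);" cleanup in the patch.
 */
static int pin_all(void **res, long n, long fail_at)
{
	long i;

	for (i = 0; i < n; i++) {
		res[i] = (i == fail_at) ? NULL : malloc(1);
		if (!res[i])
			goto err;
	}
	return 0;

err:
	while (--i >= 0) {	/* drop only what was actually acquired */
		free(res[i]);
		res[i] = NULL;
	}
	return -1;
}
```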