From: Zhao Liu
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie, Daniel Vetter, Ira Weiny, "Fabio M. De Francesco", intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org
Cc: Dave Hansen, Zhenyu Wang, Zhao Liu
Subject: [PATCH v3 9/9] drm/i915: Use kmap_local_page() in gem/i915_gem_execbuffer.c
Date: Sun, 3 Dec 2023 21:29:47 +0800
Message-Id: <20231203132947.2328805-10-zhao1.liu@linux.intel.com>
In-Reply-To: <20231203132947.2328805-1-zhao1.liu@linux.intel.com>
References: <20231203132947.2328805-1-zhao1.liu@linux.intel.com>

From: Zhao Liu

The use of kmap_atomic()
is being deprecated in favor of kmap_local_page() [1], and this patch
converts the calls from kmap_atomic() to kmap_local_page().

The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (kmap_atomic() disables
preemption in the !PREEMPT_RT case; on PREEMPT_RT it only disables
migration). With kmap_local_page(), we can avoid the often unwanted side
effects of unnecessary page-fault and preemption disabling.

In i915_gem_execbuffer.c, eb->reloc_cache.vaddr is mapped by
kmap_atomic() in eb_relocate_entry() and unmapped by kunmap_atomic() in
reloc_cache_reset(). This mapping/unmapping occurs in two places: one in
eb_relocate_vma(), and the other in eb_relocate_vma_slow().

Neither eb_relocate_vma() nor eb_relocate_vma_slow() needs page faults
or preemption disabled across the above mapping/unmapping, so they can
simply use kmap_local_page()/kunmap_local(), which perform the
mapping/unmapping regardless of the context.

Convert the calls of kmap_atomic()/kunmap_atomic() to
kmap_local_page()/kunmap_local().

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com

Suggested-by: Ira Weiny
Suggested-by: Fabio M. De Francesco
Signed-off-by: Zhao Liu
Reviewed-by: Ira Weiny
Reviewed-by: Fabio M. De Francesco
---
Suggested-by credits:
Ira: Referred to his task document and review comments.
Fabio: Referred to his boilerplate commit message and his description of
why kmap_local_page() should be preferred.
---
 drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 683fd8d3151c..18b0f3117074 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1156,7 +1156,7 @@ static void reloc_cache_unmap(struct reloc_cache *cache)
 
 	vaddr = unmask_page(cache->vaddr);
 	if (cache->vaddr & KMAP)
-		kunmap_atomic(vaddr);
+		kunmap_local(vaddr);
 	else
 		io_mapping_unmap_atomic((void __iomem *)vaddr);
 }
@@ -1172,7 +1172,7 @@ static void reloc_cache_remap(struct reloc_cache *cache,
 	if (cache->vaddr & KMAP) {
 		struct page *page = i915_gem_object_get_page(obj, cache->page);
 
-		vaddr = kmap_atomic(page);
+		vaddr = kmap_local_page(page);
 		cache->vaddr = unmask_flags(cache->vaddr) |
 			       (unsigned long)vaddr;
 	} else {
@@ -1202,7 +1202,7 @@ static void reloc_cache_reset(struct reloc_cache *cache, struct i915_execbuffer
 		if (cache->vaddr & CLFLUSH_AFTER)
 			mb();
 
-		kunmap_atomic(vaddr);
+		kunmap_local(vaddr);
 		i915_gem_object_finish_access(obj);
 	} else {
 		struct i915_ggtt *ggtt = cache_to_ggtt(cache);
@@ -1234,7 +1234,7 @@ static void *reloc_kmap(struct drm_i915_gem_object *obj,
 	struct page *page;
 
 	if (cache->vaddr) {
-		kunmap_atomic(unmask_page(cache->vaddr));
+		kunmap_local(unmask_page(cache->vaddr));
 	} else {
 		unsigned int flushes;
 		int err;
@@ -1256,7 +1256,7 @@ static void *reloc_kmap(struct drm_i915_gem_object *obj,
 	if (!obj->mm.dirty)
 		set_page_dirty(page);
 
-	vaddr = kmap_atomic(page);
+	vaddr = kmap_local_page(page);
 	cache->vaddr = unmask_flags(cache->vaddr) |
 		       (unsigned long)vaddr;
 	cache->page = pageno;
-- 
2.34.1