Message-ID: <78821fe7-a22d-d731-0f5c-b9bcace06e1f@linux.intel.com>
Date: Fri, 31 Mar 2023 09:05:24 +0100
Subject: Re: [PATCH v2 6/9] drm/i915: Use kmap_local_page() in
 gem/selftests/i915_gem_context.c
To: Ira Weiny, Zhao Liu, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
 David Airlie, Daniel Vetter, Matthew Auld, Thomas Hellström, Nirmoy Das,
 Maarten Lankhorst, Chris Wilson, Christian König,
 intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org
Cc: "Fabio M. De Francesco", Zhenyu Wang, Zhao Liu, Dave Hansen
References: <20230329073220.3982460-1-zhao1.liu@linux.intel.com>
 <20230329073220.3982460-7-zhao1.liu@linux.intel.com>
 <642654876a503_375f7e294e@iweiny-mobl.notmuch>
From: Tvrtko Ursulin
Organization: Intel Corporation UK Plc
In-Reply-To: <642654876a503_375f7e294e@iweiny-mobl.notmuch>

On 31/03/2023 04:33, Ira Weiny wrote:
> Zhao Liu wrote:
>> From: Zhao Liu
>>
>> The use of kmap_atomic() is being deprecated in favor of
>> kmap_local_page()[1], and this patch converts the call from
>> kmap_atomic() to kmap_local_page().
>>
>> The main difference between atomic and local mappings is that local
>> mappings do not disable page faults or preemption.
>>
>> With kmap_local_page(), we can avoid the often unwanted side effects
>> of unnecessary page faults and preemption disables.
>>
>> In drm/i915/gem/selftests/i915_gem_context.c, the functions cpu_fill()
>> and cpu_check() mainly use the mapping to flush the cache and to
>> check/assign values.
>>
>> There are two reasons why cpu_fill() and cpu_check() do not need to
>> disable page faults and preemption for the mapping:
>>
>> 1. The flush operation is safe. cpu_fill() and cpu_check() call
>> drm_clflush_virt_range(), which uses CLFLUSHOPT or WBINVD to flush.
>> Since CLFLUSHOPT is global on x86 and WBINVD is issued on each CPU in
>> drm_clflush_virt_range(), the flush operation is global.
>>
>> 2. Any context switch caused by preemption or a page fault (a page
>> fault may sleep) does not affect the validity of the local mapping.
>>
>> Therefore, cpu_fill() and cpu_check() are functions where the use of
>> kmap_local_page() in place of kmap_atomic() is well suited.
>>
>> Convert the calls of kmap_atomic() / kunmap_atomic() to
>> kmap_local_page() / kunmap_local().
>>
>> [1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com
>>
>> v2:
>> * Dropped the hot plug related description since it has nothing to do
>>   with kmap_local_page().
>> * No code change since v1; added a description of the motivation for
>>   using kmap_local_page().
>>
>> Suggested-by: Dave Hansen
>> Suggested-by: Ira Weiny
>
> First off I think this is fine.
>
> But as I looked at this final selftests patch I began to wonder how the
> memory being mapped here and in the previous selftests patches is
> allocated. Does highmem need to be considered at all? Unfortunately, I
> could not determine where the memory in the SG list of this test gem
> object was allocated.
>
> AFAICS cpu_fill() is only called in create_test_object(). Digging into
> huge_gem_object() did not reveal where these pages were allocated from.
>
> I wonder if these kmap_local_page() calls could be removed entirely
> based on knowing that the pages were allocated from low mem? Removing
> yet another user of highmem altogether would be best if possible.
>
> Do you know how these test objects are created? Do the pages come from
> user space somehow?
FWIW create_test_object -> huge_gem_object ->
i915_gem_object_init(obj, &huge_ops, &lock_class, 0);

Which is:

static const struct drm_i915_gem_object_ops huge_ops = {
	.name = "huge-gem",
	.get_pages = huge_get_pages,
	.put_pages = huge_put_pages,
};

And:

huge_get_pages()
...
#define GFP (GFP_KERNEL | __GFP_NOWARN | __GFP_RETRY_MAYFAIL)
...
	page = alloc_page(GFP | __GFP_HIGHMEM);

>
> Regardless this is still a step in the right direction so:
>
> Reviewed-by: Ira Weiny

Yeah LGTM. FYI I am yet to read through the rest of the series, but I
don't think there will be anything problematic, and it passed our CI, so
it is likely good to pull in.

Regards,

Tvrtko

>
>> Suggested-by: Fabio M. De Francesco
>> Signed-off-by: Zhao Liu
>> ---
>> Suggested by credits:
>> Dave: Referred to his explanation about cache flush.
>> Ira: Referred to his task document, review comments and explanation
>> about cache flush.
>> Fabio: Referred to his boilerplate commit message and his description
>> about why kmap_local_page() should be preferred.
>> ---
>>  drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c | 8 ++++----
>>  1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
>> index a81fa6a20f5a..dcbc0b8e3323 100644
>> --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
>> +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
>> @@ -481,12 +481,12 @@ static int cpu_fill(struct drm_i915_gem_object *obj, u32 value)
>>  	for (n = 0; n < real_page_count(obj); n++) {
>>  		u32 *map;
>>
>> -		map = kmap_atomic(i915_gem_object_get_page(obj, n));
>> +		map = kmap_local_page(i915_gem_object_get_page(obj, n));
>>  		for (m = 0; m < DW_PER_PAGE; m++)
>>  			map[m] = value;
>>  		if (!has_llc)
>>  			drm_clflush_virt_range(map, PAGE_SIZE);
>> -		kunmap_atomic(map);
>> +		kunmap_local(map);
>>  	}
>>
>>  	i915_gem_object_finish_access(obj);
>> @@ -512,7 +512,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj,
>>  	for (n = 0; n < real_page_count(obj); n++) {
>>  		u32 *map, m;
>>
>> -		map = kmap_atomic(i915_gem_object_get_page(obj, n));
>> +		map = kmap_local_page(i915_gem_object_get_page(obj, n));
>>  		if (needs_flush & CLFLUSH_BEFORE)
>>  			drm_clflush_virt_range(map, PAGE_SIZE);
>>
>> @@ -538,7 +538,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj,
>>  		}
>>
>>  out_unmap:
>> -		kunmap_atomic(map);
>> +		kunmap_local(map);
>>  		if (err)
>>  			break;
>>  	}
>> --
>> 2.34.1
>>
>
>
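
A minimal sketch of the mapping pattern under discussion, for reference.
It is illustrative only: fill_page() is a made-up helper name, while the
kmap_local_page()/kunmap_local() pairing and the drm_clflush_virt_range()
call mirror cpu_fill() in the diff above, and the __GFP_HIGHMEM point comes
from the huge_get_pages() excerpt quoted earlier in this reply.

#include <linux/highmem.h>
#include <linux/mm.h>
#include <drm/drm_cache.h>

/*
 * Illustrative sketch, not part of the patch: fill one test page through a
 * temporary kernel mapping.  kmap_local_page() leaves preemption and page
 * faults enabled; the mapping is only valid in the current context, which
 * is all the selftest needs.  Because huge_get_pages() allocates with
 * __GFP_HIGHMEM, the page may have no permanent kernel mapping on 32-bit
 * kernels, so a plain page_address() would not be a safe replacement.
 */
static void fill_page(struct page *page, u32 value)
{
	u32 *map = kmap_local_page(page);
	unsigned int i;

	for (i = 0; i < PAGE_SIZE / sizeof(*map); i++)
		map[i] = value;

	drm_clflush_virt_range(map, PAGE_SIZE);
	kunmap_local(map);
}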