Date: Wed, 14 Sep 2022 18:18:18 +0300
From: "Kirill A. Shutemov"
To: Ashok Raj
Cc: "Kirill A. Shutemov", Ashok Raj, Dave Hansen, Andy Lutomirski,
    Peter Zijlstra, x86@kernel.org, Kostya Serebryany, Andrey Ryabinin,
    Andrey Konovalov, Alexander Potapenko, Taras Madan, Dmitry Vyukov,
    "H. J. Lu", Andi Kleen, Rick Edgecombe, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Jacob Jun Pan, Jason Gunthorpe,
    Joerg Roedel
Subject: Re: [PATCHv8 00/11] Linear Address Masking enabling
Message-ID: <20220914151818.uupzpyd333qnnmlt@box.shutemov.name>
References: <20220830010104.1282-1-kirill.shutemov@linux.intel.com>
 <20220904003952.fheisiloilxh3mpo@box.shutemov.name>
 <20220912224930.ukakmmwumchyacqc@box.shutemov.name>
 <20220914144518.46rhhyh7zmxieozs@box.shutemov.name>

On Wed, Sep 14, 2022 at 08:11:19AM -0700, Ashok Raj wrote:
> On Wed, Sep 14, 2022 at 05:45:18PM +0300, Kirill A. Shutemov wrote:
> > On Tue, Sep 13, 2022 at 01:49:30AM +0300, Kirill A. Shutemov wrote:
> > > On Sun, Sep 04, 2022 at 03:39:52AM +0300, Kirill A. Shutemov wrote:
> > > > On Thu, Sep 01, 2022 at 05:45:08PM +0000, Ashok Raj wrote:
> > > > > Hi Kirill,
> > > > >
> > > > > On Tue, Aug 30, 2022 at 04:00:53AM +0300, Kirill A. Shutemov wrote:
> > > > > > Linear Address Masking[1] (LAM) modifies the checking that is
> > > > > > applied to 64-bit linear addresses, allowing software to use the
> > > > > > untranslated address bits for metadata.
> > > > >
> > > > > We discussed this internally, but it didn't bubble up here.
> > > > >
> > > > > We are working on enabling Shared Virtual Addressing (SVA) within
> > > > > the IOMMU. This permits the user to share a VA directly with the
> > > > > device, and the device can even participate in fixing page faults
> > > > > and such.
> > > > >
> > > > > The IOMMU enforces canonical addressing; since we are hijacking
> > > > > the top-order bits for metadata, the address will fail the sanity
> > > > > check and we would return a failure back to the device on any
> > > > > page fault coming from the device.
> > > > >
> > > > > It also complicates how the device TLB and ATS work, and needs
> > > > > some major improvements to detect whether a device is capable of
> > > > > accepting tagged pointers and to adjust the devtlb to act
> > > > > accordingly.
> > > > >
> > > > > Both are orthogonal features, but there is an intersection of the
> > > > > two that is fundamentally incompatible.
> > > > >
> > > > > It's even more important since an application might be using SVA
> > > > > under the covers, via some library, without its knowledge.
> > > > >
> > > > > The path would be:
> > > > >
> > > > > 1. Ensure LAM and SVM are incompatible by design, without major
> > > > >    changes.
> > > > >    - If LAM is already enabled and SVM enabling is later requested
> > > > >      by the user, that should fail, and vice versa.
> > > > >    - Provide an API for the user to ask for an opt-out. Then they
> > > > >      know they must sanitize the pointers before sending them to
> > > > >      the device, or that the working set is already isolated and
> > > > >      needs no work.
> > > >
> > > > The patch below implements something like this. It is a PoC,
> > > > build-tested only.
> > > >
> > > > To be honest, I hate it. It is clearly a layering violation. It
> > > > feels dirty. But I don't see any better way as we tie orthogonal
> > > > features together.
> > > >
> > > > Also, I have no idea how to make forced PASID allocation work if
> > > > LAM is enabled. What would the API have to look like?
> > >
> > > Jacob, Ashok, any comment on this part?
> > >
> > > I expect that in many cases LAM will be enabled very early in process
> > > start (like before malloc is functional), and that makes PASID
> > > allocation always fail.
> > >
> > > Any way out?
> >
> > We need closure on this to proceed. Any clue?
>
> Failing PASID allocation seems like the right thing to do here. If the
> application is explicitly allocating PASIDs, it can opt out using a
> mechanism similar to the one you have for LAM enabling. So the user
> takes responsibility for sanitizing pointers.
>
> If some library is using an accelerator without the application's
> knowledge, it would use the failure as a mechanism to take an alternate
> path if one exists.
>
> I don't know if both LAM and SVM need a separate forced opt-in (or
> rather, I don't have an opinion). Is this what you were asking?
>
> + Joerg, JasonG in case they have an opinion.

My point is that the patch provides a way to override the LAM vs. PASID
mutual exclusion, but only if the PASID is allocated first.

If LAM is enabled before a PASID is allocated, there is no way to
forcefully allocate a PASID, bypassing the LAM check. I think there
should be one, no?

--
Kiryl Shutsemau / Kirill A. Shutemov
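A rough sketch of the fallback path described above, where a library
probes for SVA and, when PASID allocation is refused (for instance
because LAM is already enabled for the process), silently falls back to
a non-SVA submission path. All names below are hypothetical stand-ins,
not a real accelerator API:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for a real accelerator library's entry points. */
static int accel_try_bind_sva(void)   { return -EPERM; /* pretend LAM blocked it */ }
static int accel_init_copy_path(void) { return 0; }

struct accel_ctx {
	bool use_sva;
	int  pasid;	/* valid only when use_sva is true */
};

static int accel_init(struct accel_ctx *ctx)
{
	int pasid = accel_try_bind_sva();

	if (pasid >= 0) {
		ctx->use_sva = true;
		ctx->pasid   = pasid;
		return 0;
	}

	/* PASID refused: don't share page tables, copy buffers instead. */
	ctx->use_sva = false;
	ctx->pasid   = -1;
	return accel_init_copy_path();
}

int main(void)
{
	struct accel_ctx ctx;

	if (accel_init(&ctx))
		return 1;
	printf("SVA in use: %s\n", ctx.use_sva ? "yes" : "no");
	return 0;
}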