Subject: Re: [PATCH v4 4/7] iommu/vt-d: Handle non-page aligned address
Cc: baolu.lu@linux.intel.com, Yi Liu, "Tian, Kevin", Raj Ashok, Eric Auger
To: Jacob Pan, iommu@lists.linux-foundation.org, LKML, Joerg Roedel, David Woodhouse
From: Lu Baolu
Date: Wed, 22 Jul 2020 09:01:27 +0800
Message-ID: <7a8f70af-f39b-1b57-a9eb-db085ab63149@linux.intel.com>
In-Reply-To: <20200721095036.1977e3bf@jacob-builder>
References: <1594080774-33413-1-git-send-email-jacob.jun.pan@linux.intel.com> <1594080774-33413-5-git-send-email-jacob.jun.pan@linux.intel.com> <20200721095036.1977e3bf@jacob-builder>

Hi Jacob,

On 7/22/20 12:50 AM, Jacob Pan wrote:
> Hi Baolu,
>
> Not sure what state this patch is in; there is a bug in it (see
> below). Shall I send out an updated version of this one only, or
> another incremental patch?

Please send an updated version. I hope Joerg can pick these up as a
5.8 fix.

Best regards,
baolu

>
> Thanks,
>
> Jacob
>
> On Mon, 6 Jul 2020 17:12:51 -0700
> Jacob Pan wrote:
>
>> From: Liu Yi L
>>
>> Address information for device TLB invalidation comes from userspace
>> when a device is directly assigned to a guest with vIOMMU support.
>> VT-d requires page-aligned addresses. This patch checks for and
>> enforces page alignment of the address; otherwise reserved bits can
>> be set in the invalidation descriptor, and an unrecoverable fault
>> will be reported due to the non-zero value in the reserved bits.
>>
>> Fixes: 61a06a16e36d8 ("iommu/vt-d: Support flushing more translation
>> cache types")
>> Acked-by: Lu Baolu
>> Signed-off-by: Liu Yi L
>> Signed-off-by: Jacob Pan
>>
>> ---
>>  drivers/iommu/intel/dmar.c | 20 ++++++++++++++++++--
>>  1 file changed, 18 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
>> index d9f973fa1190..b2c53bada905 100644
>> --- a/drivers/iommu/intel/dmar.c
>> +++ b/drivers/iommu/intel/dmar.c
>> @@ -1455,9 +1455,25 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
>>       * Max Invs Pending (MIP) is set to 0 for now until we have DIT in
>>       * ECAP.
>>       */
>> -    desc.qw1 |= addr & ~mask;
>> -    if (size_order)
>> +    if (addr & GENMASK_ULL(size_order + VTD_PAGE_SHIFT, 0))
>> +        pr_warn_ratelimited("Invalidate non-aligned address %llx, order %d\n", addr, size_order);
>> +
>> +    /* Take page address */
>> +    desc.qw1 = QI_DEV_EIOTLB_ADDR(addr);
>> +
>> +    if (size_order) {
>> +        /*
>> +         * Existing 0s in address below size_order may be the least
>> +         * significant bit, we must set them to 1s to avoid having
>> +         * smaller size than desired.
>> +         */
>> +        desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT,
>> +                                VTD_PAGE_SHIFT);
> Yi reported the issue; it should be:
>
>         desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT - 1,
>                                 VTD_PAGE_SHIFT);
>
>> +        /* Clear size_order bit to indicate size */
>> +        desc.qw1 &= ~mask;
>> +        /* Set the S bit to indicate flushing more than 1 page */
>>          desc.qw1 |= QI_DEV_EIOTLB_SIZE;
>> +    }
>>
>>      qi_submit_sync(iommu, &desc, 1, 0);
>>  }
>
> [Jacob Pan]
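For readers following the bit manipulation, the sketch below replays the size-order encoding in plain userspace C and contrasts the GENMASK_ULL() upper bound in the posted patch with the corrected bound Yi and Jacob point out. This is only an illustration of the arithmetic, not the kernel code: GENMASK_ULL, VTD_PAGE_SHIFT (12), QI_DEV_EIOTLB_ADDR and QI_DEV_EIOTLB_SIZE are re-defined locally to mirror what the kernel headers are assumed to provide, "mask" is assumed to be computed as in the surrounding function, and encode() is a hypothetical stand-in for the relevant lines of qi_flush_dev_iotlb_pasid().

/* Standalone sketch of the device-TLB size-order encoding (assumptions noted above). */
#include <stdio.h>
#include <stdint.h>

/* Local re-definition of the kernel's GENMASK_ULL(h, l): bits l..h set. */
#define GENMASK_ULL(h, l)       (((~0ULL) << (l)) & ((~0ULL) >> (63 - (h))))

#define VTD_PAGE_SHIFT          12
#define VTD_PAGE_MASK           (~((1ULL << VTD_PAGE_SHIFT) - 1))
#define QI_DEV_EIOTLB_ADDR(a)   ((uint64_t)(a) & VTD_PAGE_MASK)
#define QI_DEV_EIOTLB_SIZE      (1ULL << 11)    /* assumed S-bit position */

/*
 * Encode the address/size field the way the patch does, with a selectable
 * upper bound for the GENMASK_ULL() fill so both variants can be compared.
 */
static uint64_t encode(uint64_t addr, unsigned int size_order, unsigned int high_bit)
{
        /* mask as assumed in the caller: the bit whose 0 encodes the range size */
        uint64_t mask = 1ULL << (VTD_PAGE_SHIFT + size_order - 1);
        uint64_t qw1 = QI_DEV_EIOTLB_ADDR(addr);        /* take page address */

        if (size_order) {
                qw1 |= GENMASK_ULL(high_bit, VTD_PAGE_SHIFT);   /* fill low bits with 1s */
                qw1 &= ~mask;           /* lowest 0 bit indicates the size */
                qw1 |= QI_DEV_EIOTLB_SIZE;      /* S bit: more than one page */
        }
        return qw1;
}

int main(void)
{
        unsigned int size_order = 1;    /* 2^1 pages = 8KB */
        uint64_t addr = 0x0;            /* invalidate the 8KB region at 0 */

        /* Posted patch: upper bound one bit too high, spills into address bit 13. */
        printf("buggy   qw1 = 0x%llx\n",
               (unsigned long long)encode(addr, size_order,
                                          size_order + VTD_PAGE_SHIFT));
        /* Corrected bound: only the size-encoding bits below the region are filled. */
        printf("correct qw1 = 0x%llx\n",
               (unsigned long long)encode(addr, size_order,
                                          size_order + VTD_PAGE_SHIFT - 1));
        return 0;
}

With these local definitions, size_order = 1 and addr = 0, this prints 0x2800 for the buggy fill and 0x800 for the corrected one: the extra bit from the too-wide mask survives as address bit 13, so the device-TLB invalidation would target the 8KB region at 0x2000 instead of the intended region at 0, which is the bug the corrected GENMASK_ULL() bound avoids.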