From: Baolu Lu
To: Ethan Zhao, Joerg Roedel, Jason Gunthorpe, Christoph Hellwig,
 Kevin Tian, Ashok Raj, Will Deacon, Robin Murphy,
 Jean-Philippe Brucker, Dave Jiang, Vinod Koul
Cc: baolu.lu@linux.intel.com, Eric Auger, Liu Yi L, Jacob jun Pan,
 iommu@lists.linux-foundation.org, iommu@lists.linux.dev,
 linux-kernel@vger.kernel.org, Jean-Philippe Brucker
Subject: Re: [PATCH v9 10/11] iommu: Per-domain I/O page fault handling
Date: Tue, 28 Jun 2022 14:28:14 +0800
Message-ID: <693a3604-d70b-e08c-2621-7f0cb9bdb6ca@linux.intel.com>
References: <20220621144353.17547-1-baolu.lu@linux.intel.com>
 <20220621144353.17547-11-baolu.lu@linux.intel.com>

Hi Ethan,

On 2022/6/27 21:03, Ethan Zhao wrote:
> Hi,
>
> On 2022/6/21 22:43, Lu Baolu wrote:
>> Tweak the I/O page fault handling framework to route page faults to
>> the domain and call the page fault handler retrieved from the domain.
>> This makes it possible for the I/O page fault handling framework to
>> serve more usage scenarios, as long as they have an IOMMU domain and
>> install a page fault handler in it. Some unused functions are also
>> removed to avoid dead code.
>>
>> iommu_get_domain_for_dev_pasid(), which retrieves the attached domain
>> for a {device, PASID} pair, is used by the page fault handling
>> framework, which knows the {device, PASID} reported from the iommu
>> driver. We have a guarantee that the SVA domain doesn't go away during
>> IOPF handling, because unbind() waits for pending faults with
>> iopf_queue_flush_dev() before freeing the domain. Hence, there is no
>> need to synchronize the life cycle of the iommu domains between the
>> unbind() and interrupt threads.
>>
>> Signed-off-by: Lu Baolu
>> Reviewed-by: Jean-Philippe Brucker
>> ---
>>   drivers/iommu/io-pgfault.c | 64 +++++---------------------------------
>>   1 file changed, 7 insertions(+), 57 deletions(-)
>>
>> diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
>> index aee9e033012f..4f24ec703479 100644
>> --- a/drivers/iommu/io-pgfault.c
>> +++ b/drivers/iommu/io-pgfault.c
>> @@ -69,69 +69,18 @@ static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf,
>>       return iommu_page_response(dev, &resp);
>>   }
>>
>> -static enum iommu_page_response_code
>> -iopf_handle_single(struct iopf_fault *iopf)
>> -{
>> -    vm_fault_t ret;
>> -    struct mm_struct *mm;
>> -    struct vm_area_struct *vma;
>> -    unsigned int access_flags = 0;
>> -    unsigned int fault_flags = FAULT_FLAG_REMOTE;
>> -    struct iommu_fault_page_request *prm = &iopf->fault.prm;
>> -    enum iommu_page_response_code status = IOMMU_PAGE_RESP_INVALID;
>> -
>> -    if (!(prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID))
>> -        return status;
>> -
>> -    mm = iommu_sva_find(prm->pasid);
>> -    if (IS_ERR_OR_NULL(mm))
>> -        return status;
>> -
>> -    mmap_read_lock(mm);
>> -
>> -    vma = find_extend_vma(mm, prm->addr);
>> -    if (!vma)
>> -        /* Unmapped area */
>> -        goto out_put_mm;
>> -
>> -    if (prm->perm & IOMMU_FAULT_PERM_READ)
>> -        access_flags |= VM_READ;
>> -
>> -    if (prm->perm & IOMMU_FAULT_PERM_WRITE) {
>> -        access_flags |= VM_WRITE;
>> -        fault_flags |= FAULT_FLAG_WRITE;
>> -    }
>> -
>> -    if (prm->perm & IOMMU_FAULT_PERM_EXEC) {
>> -        access_flags |= VM_EXEC;
>> -        fault_flags |= FAULT_FLAG_INSTRUCTION;
>> -    }
>> -
>> -    if (!(prm->perm & IOMMU_FAULT_PERM_PRIV))
>> -        fault_flags |= FAULT_FLAG_USER;
>> -
>> -    if (access_flags & ~vma->vm_flags)
>> -        /* Access fault */
>> -        goto out_put_mm;
>> -
>> -    ret = handle_mm_fault(vma, prm->addr, fault_flags, NULL);
>> -    status = ret & VM_FAULT_ERROR ? IOMMU_PAGE_RESP_INVALID :
>> -        IOMMU_PAGE_RESP_SUCCESS;
>> -
>> -out_put_mm:
>> -    mmap_read_unlock(mm);
>> -    mmput(mm);
>> -
>> -    return status;
>> -}
>> -
>
> Once iopf_handle_single() is removed, the name iopf_handle_group()
> looks a little weird and confusing -- does this "group" mean the iommu
> group (domain)? It took me some minutes of looking into the code to
> see that it means a batch/list/queue of iopfs, and that
> iopf_handle_group() becomes a generic iopf_handler().
>
> Does it make sense to revise the names of iopf_handle_group(),
> iopf_complete_group(), and iopf_group in this patch set?
>
> Thanks,
> Ethan

No. This is not the iommu group. It is the page request group defined
by the PCI SIG spec: multiple page requests can be put in a group with
the same group id, and all page requests in a group can be responded to
the device in one shot.
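In code terms, the grouping works roughly like this (a minimal
standalone sketch of the PRG concept, not the io-pgfault.c code; every
name in it is hypothetical):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of one PCI page request. */
struct page_request {
	uint64_t addr;       /* faulting I/O virtual address */
	uint16_t prg_index;  /* Page Request Group this request belongs to */
	bool last;           /* true for the final request of the group */
};

enum prg_response_code { PRG_RESP_SUCCESS, PRG_RESP_INVALID };

/* Hypothetical helper that tries to resolve one faulting address. */
extern bool resolve_fault(uint64_t addr);

/*
 * Requests sharing a prg_index are collected until the one marked
 * "last" arrives. The group is then handled as a unit, and a single
 * response tagged with that prg_index retires every request in the
 * group at the device.
 */
enum prg_response_code handle_prg(const struct page_request *reqs, int n)
{
	for (int i = 0; i < n; i++) {
		if (!resolve_fault(reqs[i].addr))
			return PRG_RESP_INVALID; /* one failure fails the whole group */
	}
	return PRG_RESP_SUCCESS; /* one response covers all n requests */
}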
Best regards,
baolu

>>   static void iopf_handle_group(struct work_struct *work)
>>   {
>>       struct iopf_group *group;
>> +    struct iommu_domain *domain;
>>       struct iopf_fault *iopf, *next;
>>       enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
>>
>>       group = container_of(work, struct iopf_group, work);
>> +    domain = iommu_get_domain_for_dev_pasid(group->dev,
>> +                group->last_fault.fault.prm.pasid);
>> +    if (!domain || !domain->iopf_handler)
>> +        status = IOMMU_PAGE_RESP_INVALID;
>>
>>       list_for_each_entry_safe(iopf, next, &group->faults, list) {
>>           /*
>> @@ -139,7 +88,8 @@ static void iopf_handle_group(struct work_struct *work)
>>            * faults in the group if there is an error.
>>            */
>>           if (status == IOMMU_PAGE_RESP_SUCCESS)
>> -            status = iopf_handle_single(iopf);
>> +            status = domain->iopf_handler(&iopf->fault,
>> +                              domain->fault_data);
>>
>>           if (!(iopf->fault.prm.flags &
>>                 IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))
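To make the new control flow concrete, the consumer side of this
interface would look roughly like the following (a sketch inferred from
the commit message and the hunk above, not the real SVA code;
my_iopf_handler, install_iopf_handler, and the mm fault_data are
illustrative):

/*
 * Sketch: how a fault consumer plugs into the reworked framework. The
 * domain owner installs a handler plus opaque data on its domain;
 * iopf_handle_group() then looks the domain up by {device, PASID} and
 * invokes the handler for each fault in the group.
 */
static enum iommu_page_response_code
my_iopf_handler(struct iommu_fault *fault, void *data)
{
	struct mm_struct *mm = data;

	/*
	 * Resolve fault->prm.addr against mm here, much as the removed
	 * iopf_handle_single() did with handle_mm_fault().
	 */
	return mm ? IOMMU_PAGE_RESP_SUCCESS : IOMMU_PAGE_RESP_INVALID;
}

static void install_iopf_handler(struct iommu_domain *domain,
				 struct mm_struct *mm)
{
	/* Done at domain setup time, before faults can be reported. */
	domain->iopf_handler = my_iopf_handler;
	domain->fault_data = mm;
}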