Subject: Re: [PATCH v8 3/3] iommu/arm-smmu: Add global/context fault implementation hooks
To: Krishna Reddy, Jonathan Hunter
Cc: Sachin Nikam, nicoleotsuka@gmail.com, Mikko Perttunen, Bryan Huntsman,
 will@kernel.org, linux-kernel@vger.kernel.org, Pritesh Raithatha,
 Timo Alho, iommu@lists.linux-foundation.org, Nicolin Chen,
 linux-tegra@vger.kernel.org, Yu-Huan Hsu, Thierry Reding,
 linux-arm-kernel@lists.infradead.org, Bitan Biswas
References: <20200630001051.12350-1-vdumpa@nvidia.com>
 <20200630001051.12350-4-vdumpa@nvidia.com>
 <4b4b20af-7baa-0987-e40d-af74235153f6@nvidia.com>
 <6c2ce909-c71b-351f-79f5-b1a4b4c0e4ac@arm.com>
From: Robin Murphy
Message-ID: <446ffe79-3a44-5d41-459f-b698a1cc361b@arm.com>
Date: Wed, 1 Jul 2020 20:14:10 +0100

On 2020-07-01 19:48, Krishna Reddy wrote:
>>>> +	for (inst = 0; inst < nvidia_smmu->num_inst; inst++) {
>>>> +		irq_ret = nvidia_smmu_global_fault_inst(irq, smmu, inst);
>>>> +		if (irq_ret == IRQ_HANDLED)
>>>> +			return irq_ret;
>>>
>>> Any chance there could be more than one SMMU faulting by the time we
>>> service the interrupt?
>
>> It certainly seems plausible if the interconnect is automatically
>> load-balancing requests across the SMMU instances - say a driver bug
>> caused a buffer to be unmapped too early, there could be many in-flight
>> accesses to parts of that buffer that aren't all taking the same path
>> and thus could now fault in parallel.
>>
>> [ And anyone inclined to nitpick global vs. context faults,
>> s/unmap a buffer/tear down a domain/ ;) ]
>>
>> Either way I think it would be easier to reason about if we just
>> handled these like a typical shared interrupt and always checked all
>> the instances.
>
> It would be optimal to check across all instances at the same time.
>
>>>> +	for (idx = 0; idx < smmu->num_context_banks; idx++) {
>>>> +		irq_ret = nvidia_smmu_context_fault_bank(irq, smmu,
>>>> +							 idx,
>>>> +							 inst);
>>>> +
>>>> +		if (irq_ret == IRQ_HANDLED)
>>>> +			return irq_ret;
>>>
>>> Any reason why we don't check all banks?
>
>> As above, we certainly shouldn't bail out without checking the bank for
>> the offending domain across all of its instances, and I guess the way
>> this works means we would have to iterate all the banks to achieve
>> that.
>
> With a shared IRQ line, context fault identification is already not
> optimal. Reading all the context banks every time adds extra MMIO read
> overhead, but it may not hurt real use cases, as these faults only
> happen when there are bugs.

Right, I did ponder the idea of a whole programmatic "request_context_irq"
hook that would allow registering the handler for both interrupts with the
appropriate context bank and instance data, but since all interrupts are
currently unexpected it seems somewhat hard to justify the extra complexity.
Obviously we can revisit this in future if you want to start actually doing
something with faults like the qcom GPU folks do.

Robin.
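
For concreteness, a minimal sketch of the "handle it like a typical shared
interrupt and always check all the instances" shape discussed above. The
nvidia_smmu_global_fault_inst() helper, the num_inst field and the
nvidia_smmu wrapper are taken from the quoted patch hunks; the handler
signature and the to_nvidia_smmu() accessor are assumptions for
illustration, not the patch's final code:

static irqreturn_t nvidia_smmu_global_fault(int irq, void *dev)
{
	unsigned int inst;
	irqreturn_t ret = IRQ_NONE;
	struct arm_smmu_device *smmu = dev;
	/* to_nvidia_smmu() is assumed to recover the per-SoC wrapper */
	struct nvidia_smmu *nvidia_smmu = to_nvidia_smmu(smmu);

	for (inst = 0; inst < nvidia_smmu->num_inst; inst++) {
		/*
		 * Poll every instance rather than returning on the first
		 * IRQ_HANDLED, so faults raised in parallel on the other
		 * instances are also serviced in this pass.
		 */
		if (nvidia_smmu_global_fault_inst(irq, smmu, inst) == IRQ_HANDLED)
			ret = IRQ_HANDLED;
	}

	return ret;
}

The context-fault path would follow the same pattern: iterate every context
bank on every instance, and report IRQ_NONE only if none of them had a
fault latched.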