From: Alexey Kardashevskiy
To: linuxppc-dev@lists.ozlabs.org
Cc: Alexey Kardashevskiy, David Gibson, kvm-ppc@vger.kernel.org,
 kvm@vger.kernel.org, Alistair Popple, Reza Arbab, Sam Bobroff,
 Piotr Jaroszynski, Leonardo Augusto Guimarães Garcia,
 Jose Ricardo Ziviani, Daniel Henrique Barboza, Alex Williamson,
 Paul Mackerras, linux-kernel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH kernel v7 06/20] powerpc/pseries/iommu: Use memory@ nodes in max RAM address calculation
Date: Thu, 20 Dec 2018 19:23:36 +1100
Message-Id: <20181220082350.58113-7-aik@ozlabs.ru>
In-Reply-To: <20181220082350.58113-1-aik@ozlabs.ru>
References: <20181220082350.58113-1-aik@ozlabs.ru>
X-Mailer: git-send-email 2.17.1
X-Mailing-List: linux-kernel@vger.kernel.org

We might have memory@ nodes with "linux,usable-memory" set to zero (for
example, to replicate powernv's behaviour for GPU coherent memory), which
means that the memory needs extra initialization; but since it can be used
afterwards, the pseries platform will try mapping it for DMA, so the DMA
window needs to cover those memory regions too. If the window cannot cover
new memory regions, memory onlining fails.

This walks through the memory@ nodes to find the highest RAM address so
that a huge DMA window can cover it too, in case this memory gets onlined
later.

Signed-off-by: Alexey Kardashevskiy
---
Changes:
v4:
* uses of_read_number directly instead of cut-n-pasted read_n_cells
---
 arch/powerpc/platforms/pseries/iommu.c | 33 +++++++++++++++++++++++++-
 1 file changed, 32 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 06f0296..cbcc8ce 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -964,6 +964,37 @@ struct failed_ddw_pdn {
 
 static LIST_HEAD(failed_ddw_pdn_list);
 
+static phys_addr_t ddw_memory_hotplug_max(void)
+{
+	phys_addr_t max_addr = memory_hotplug_max();
+	struct device_node *memory;
+
+	for_each_node_by_type(memory, "memory") {
+		unsigned long start, size;
+		int ranges, n_mem_addr_cells, n_mem_size_cells, len;
+		const __be32 *memcell_buf;
+
+		memcell_buf = of_get_property(memory, "reg", &len);
+		if (!memcell_buf || len <= 0)
+			continue;
+
+		n_mem_addr_cells = of_n_addr_cells(memory);
+		n_mem_size_cells = of_n_size_cells(memory);
+
+		/* ranges in cell */
+		ranges = (len >> 2) / (n_mem_addr_cells + n_mem_size_cells);
+
+		start = of_read_number(memcell_buf, n_mem_addr_cells);
+		memcell_buf += n_mem_addr_cells;
+		size = of_read_number(memcell_buf, n_mem_size_cells);
+		memcell_buf += n_mem_size_cells;
+
+		max_addr = max_t(phys_addr_t, max_addr, start + size);
+	}
+
+	return max_addr;
+}
+
 /*
  * If the PE supports dynamic dma windows, and there is space for a table
  * that can map all pages in a linear offset, then setup such a table,
@@ -1053,7 +1084,7 @@ static u64 enable_ddw(struct pci_dev *dev, struct device_node *pdn)
 	}
 	/* verify the window * number of ptes will map the partition */
 	/* check largest block * page size > max memory hotplug addr */
-	max_addr = memory_hotplug_max();
+	max_addr = ddw_memory_hotplug_max();
 	if (query.largest_available_block < (max_addr >> page_shift)) {
 		dev_dbg(&dev->dev, "can't map partition max 0x%llx with %u "
			"%llu-sized pages\n", max_addr, query.largest_available_block,
-- 
2.17.1