From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Alexey Kardashevskiy, Jan Kara, Michael Ellerman, Sasha Levin, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH AUTOSEL 5.4 153/330] powerpc/book3s64: Fix error handling in mm_iommu_do_alloc()
Date: Thu, 17 Sep 2020 21:58:13 -0400
Message-Id: <20200918020110.2063155-153-sashal@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200918020110.2063155-1-sashal@kernel.org>
References: <20200918020110.2063155-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

From: Alexey Kardashevskiy

[ Upstream commit c4b78169e3667413184c9a20e11b5832288a109f ]

The last jump to free_exit in mm_iommu_do_alloc() happens after the page
pointers in struct mm_iommu_table_group_mem_t have already been converted
to physical addresses, so calling put_page() on those physical addresses
will likely crash.

Move the loop which calculates the pageshift and converts the page struct
pointers to physical addresses so that it runs after the point where we
can no longer fail, eliminating the need to convert the pointers back.

Fixes: eb9d7a62c386 ("powerpc/mm_iommu: Fix potential deadlock")
Reported-by: Jan Kara
Signed-off-by: Alexey Kardashevskiy
Signed-off-by: Michael Ellerman
Link: https://lore.kernel.org/r/20191223060351.26359-1-aik@ozlabs.ru
Signed-off-by: Sasha Levin
---
 arch/powerpc/mm/book3s64/iommu_api.c | 39 +++++++++++++++-------------
 1 file changed, 21 insertions(+), 18 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
index 56cc845205779..ef164851738b8 100644
--- a/arch/powerpc/mm/book3s64/iommu_api.c
+++ b/arch/powerpc/mm/book3s64/iommu_api.c
@@ -121,24 +121,6 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 		goto free_exit;
 	}
 
-	pageshift = PAGE_SHIFT;
-	for (i = 0; i < entries; ++i) {
-		struct page *page = mem->hpages[i];
-
-		/*
-		 * Allow to use larger than 64k IOMMU pages. Only do that
-		 * if we are backed by hugetlb.
-		 */
-		if ((mem->pageshift > PAGE_SHIFT) && PageHuge(page))
-			pageshift = page_shift(compound_head(page));
-		mem->pageshift = min(mem->pageshift, pageshift);
-		/*
-		 * We don't need struct page reference any more, switch
-		 * to physical address.
-		 */
-		mem->hpas[i] = page_to_pfn(page) << PAGE_SHIFT;
-	}
-
 good_exit:
 	atomic64_set(&mem->mapped, 1);
 	mem->used = 1;
@@ -158,6 +140,27 @@ good_exit:
 		}
 	}
 
+	if (mem->dev_hpa == MM_IOMMU_TABLE_INVALID_HPA) {
+		/*
+		 * Allow to use larger than 64k IOMMU pages. Only do that
+		 * if we are backed by hugetlb. Skip device memory as it is not
+		 * backed with page structs.
+		 */
+		pageshift = PAGE_SHIFT;
+		for (i = 0; i < entries; ++i) {
+			struct page *page = mem->hpages[i];
+
+			if ((mem->pageshift > PAGE_SHIFT) && PageHuge(page))
+				pageshift = page_shift(compound_head(page));
+			mem->pageshift = min(mem->pageshift, pageshift);
+			/*
+			 * We don't need struct page reference any more, switch
+			 * to physical address.
+			 */
+			mem->hpas[i] = page_to_pfn(page) << PAGE_SHIFT;
+		}
+	}
+
 	list_add_rcu(&mem->next, &mm->context.iommu_group_mem_list);
 
 	mutex_unlock(&mem_list_mutex);
-- 
2.25.1