Subject: Re: [PATCH] drm/panfrost: Prevent race when handling page fault
To: Rob Herring
Cc: Tomeu Vizoso, David Airlie, "linux-kernel@vger.kernel.org", dri-devel, Alyssa Rosenzweig
From: Steven Price
Date: Fri, 6 Sep 2019 13:42:51 +0100
References: <20190905121141.42820-1-steven.price@arm.com>

On 06/09/2019 12:10, Rob Herring wrote:
> On Thu, Sep 5, 2019 at 1:11 PM Steven Price wrote:
>>
>> When handling a GPU page fault addr_to_drm_mm_node() is used to
>> translate the GPU address to a buffer object. However it is possible
>> for the buffer object to be freed after the function has returned,
>> resulting in a use-after-free of the BO.
>>
>> Change addr_to_drm_mm_node() to return the panfrost_gem_object with
>> an extra reference on it, preventing the BO from being freed until
>> after the page fault has been handled.
>>
>> Signed-off-by: Steven Price
>> ---
>>
>> I've managed to trigger this, generating the following stack trace.
>
> Humm, the assumption was that a fault could only happen during a job
> and so a reference would already be held. Otherwise, couldn't the GPU
> also be accessing the BO after it is freed?

Ah, I guess I missed that in the commit message. This is assuming that
user space doesn't include the BO in the job even though the GPU then
does try to access it. AIUI Mesa wouldn't do this, but it is still
easily possible if user space wants to crash the kernel.

> Also, looking at this again, I think we need to hold the mm_lock
> around the drm_mm_for_each_node().

Ah, good point.
Squashing in the following should do the trick:

----8<----
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index ccc536d2e489..41f297aa259c 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -396,26 +396,33 @@ static struct panfrost_gem_object *
 addr_to_drm_mm_node(struct panfrost_device *pfdev, int as, u64 addr)
 {
 	struct panfrost_gem_object *bo = NULL;
+	struct panfrost_file_priv *priv;
 	struct drm_mm_node *node;
 	u64 offset = addr >> PAGE_SHIFT;
 	struct panfrost_mmu *mmu;

 	spin_lock(&pfdev->as_lock);
 	list_for_each_entry(mmu, &pfdev->as_lru_list, list) {
-		struct panfrost_file_priv *priv;
-		if (as != mmu->as)
-			continue;
+		if (as == mmu->as)
+			break;
+	}
+	if (as != mmu->as)
+		goto out;

-		priv = container_of(mmu, struct panfrost_file_priv, mmu);
-		drm_mm_for_each_node(node, &priv->mm) {
-			if (offset >= node->start &&
-			    offset < (node->start + node->size)) {
-				bo = drm_mm_node_to_panfrost_bo(node);
-				drm_gem_object_get(&bo->base.base);
-				goto out;
-			}
+	priv = container_of(mmu, struct panfrost_file_priv, mmu);
+
+	spin_lock(&priv->mm_lock);
+
+	drm_mm_for_each_node(node, &priv->mm) {
+		if (offset >= node->start &&
+		    offset < (node->start + node->size)) {
+			bo = drm_mm_node_to_panfrost_bo(node);
+			drm_gem_object_get(&bo->base.base);
+			break;
 		}
 	}
+
+	spin_unlock(&priv->mm_lock);
 out:
 	spin_unlock(&pfdev->as_lock);
 	return bo;