From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg Kroah-Hartman, Nicholas Piggin, Paolo Bonzini, Ben Hutchings
Subject: [PATCH 4.9 157/157] KVM: do not allow mapping valid but non-reference-counted pages
Date: Mon, 24 Jan 2022 19:44:07 +0100
Message-Id: <20220124183937.735989252@linuxfoundation.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220124183932.787526760@linuxfoundation.org>
References: <20220124183932.787526760@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Nicholas Piggin

commit f8be156be163a052a067306417cd0ff679068c97 upstream.

It's possible to create a region which maps valid but non-refcounted
pages (e.g., tail pages of non-compound higher order allocations).
These host pages can then be returned by gfn_to_page, gfn_to_pfn, etc.,
family of APIs, which take a reference to the page, which takes it from
0 to 1. When the reference is dropped, this will free the page
incorrectly.

Fix this by only taking a reference on valid pages if it was non-zero,
which indicates it is participating in normal refcounting (and can be
released with put_page).

This addresses CVE-2021-22543.
Signed-off-by: Nicholas Piggin
Tested-by: Paolo Bonzini
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini
Signed-off-by: Ben Hutchings
Signed-off-by: Greg Kroah-Hartman
---
 virt/kvm/kvm_main.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1513,6 +1513,13 @@ static bool vma_is_valid(struct vm_area_
 	return true;
 }
 
+static int kvm_try_get_pfn(kvm_pfn_t pfn)
+{
+	if (kvm_is_reserved_pfn(pfn))
+		return 1;
+	return get_page_unless_zero(pfn_to_page(pfn));
+}
+
 static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 			       unsigned long addr, bool *async,
 			       bool write_fault, bool *writable,
@@ -1562,13 +1569,21 @@ static int hva_to_pfn_remapped(struct vm
 	 * Whoever called remap_pfn_range is also going to call e.g.
 	 * unmap_mapping_range before the underlying pages are freed,
 	 * causing a call to our MMU notifier.
+	 *
+	 * Certain IO or PFNMAP mappings can be backed with valid
+	 * struct pages, but be allocated without refcounting e.g.,
+	 * tail pages of non-compound higher order allocations, which
+	 * would then underflow the refcount when the caller does the
+	 * required put_page. Don't allow those pages here.
 	 */
-	kvm_get_pfn(pfn);
+	if (!kvm_try_get_pfn(pfn))
+		r = -EFAULT;
 
 out:
 	pte_unmap_unlock(ptep, ptl);
 	*p_pfn = pfn;
-	return 0;
+
+	return r;
 }
 
 /*
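
As a side note for readers following the refcounting argument in the commit
message, below is a minimal userspace sketch of the failure mode and of the
conditional-get fix. It is not part of the patch; struct toy_page,
try_get_toy_page() and the messages it prints are illustrative assumptions
only, while the kernel itself uses struct page, get_page_unless_zero() and
put_page() as in the hunks above.

#include <stdbool.h>
#include <stdio.h>

struct toy_page {
	int refcount;	/* 0 means "not participating in refcounting" */
};

/* Old behaviour: unconditionally bump the count, even from 0 (kvm_get_pfn). */
static void get_toy_page(struct toy_page *p)
{
	p->refcount++;
}

/*
 * New behaviour: only take a reference if the count is already non-zero,
 * mirroring get_page_unless_zero(); returns false for a zero count.
 */
static bool try_get_toy_page(struct toy_page *p)
{
	if (p->refcount == 0)
		return false;
	p->refcount++;
	return true;
}

/* Dropping a reference "frees" the page when the count reaches zero. */
static void put_toy_page(struct toy_page *p)
{
	if (--p->refcount == 0)
		printf("freeing page %p\n", (void *)p);
}

int main(void)
{
	/* Model of a valid but non-refcounted page, e.g. a tail page of a
	 * non-compound higher-order allocation: its count sits at 0. */
	struct toy_page tail = { .refcount = 0 };

	/* Old path: 0 -> 1, and the caller's required put then "frees" a
	 * page that was never managed by the refcounting machinery. */
	get_toy_page(&tail);
	put_toy_page(&tail);

	/* New path: the zero count is detected and the mapping attempt is
	 * refused instead (the patch returns -EFAULT to the caller). */
	struct toy_page tail2 = { .refcount = 0 };
	if (!try_get_toy_page(&tail2))
		printf("refusing to map non-refcounted page %p\n", (void *)&tail2);

	return 0;
}

The design choice mirrored here is that a zero refcount marks a page as not
participating in normal refcounting, so the safe response is to refuse the
mapping rather than to manufacture a reference that the caller's later
put_page() would incorrectly release.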