From: Ben Hutchings <ben@decadent.org.uk>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
CC: akpm@linux-foundation.org, Denis Kirjanov, "Marc Zyngier", "Dave Martin", "Andrew Jones"
Date: Wed, 02 Oct 2019 20:06:51 +0100
Subject: [PATCH 3.16 55/87] KVM: arm64: Filter out invalid core register IDs in KVM_GET_REG_LIST
3.16.75-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Dave Martin

commit df205b5c63281e4f32caac22adda18fd68795e80 upstream.

Since commit d26c25a9d19b ("arm64: KVM: Tighten guest core register
access from userspace"), KVM_{GET,SET}_ONE_REG rejects register IDs
that do not correspond to a single underlying architectural register.

KVM_GET_REG_LIST was not changed to match however: instead, it simply
yields a list of 32-bit register IDs that together cover the whole
kvm_regs struct.  This means that if userspace tries to use the
resulting list of IDs directly to drive calls to KVM_*_ONE_REG, some
of those calls will now fail.

This was not the intention.  Instead, iterating KVM_*_ONE_REG over the
list of IDs returned by KVM_GET_REG_LIST should be guaranteed to work.

This patch fixes the problem by splitting validate_core_offset() into
a backend core_reg_size_from_offset() which does all of the work
except for checking that the size field in the register ID matches,
and kvm_arm_copy_reg_indices() and num_core_regs() are converted to
use this to enumerate the valid offsets.

kvm_arm_copy_reg_indices() now also sets the register ID size field
appropriately based on the value returned, so the register ID supplied
to userspace is fully qualified for use with the register access
ioctls.

Fixes: d26c25a9d19b ("arm64: KVM: Tighten guest core register access from userspace")
Signed-off-by: Dave Martin
Reviewed-by: Andrew Jones
Tested-by: Andrew Jones
Signed-off-by: Marc Zyngier
[bwh: Backported to 3.16:
 - Don't add unused vcpu parameter
 - Adjust context]
Signed-off-by: Ben Hutchings
---
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -46,9 +46,8 @@ static u64 core_reg_offset_from_id(u64 i
 	return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE);
 }
 
-static int validate_core_offset(const struct kvm_one_reg *reg)
+static int core_reg_size_from_offset(u64 off)
 {
-	u64 off = core_reg_offset_from_id(reg->id);
 	int size;
 
 	switch (off) {
@@ -78,13 +77,26 @@ static int validate_core_offset(const st
 		return -EINVAL;
 	}
 
-	if (KVM_REG_SIZE(reg->id) == size &&
-	    IS_ALIGNED(off, size / sizeof(__u32)))
-		return 0;
+	if (IS_ALIGNED(off, size / sizeof(__u32)))
+		return size;
 
 	return -EINVAL;
 }
 
+static int validate_core_offset(const struct kvm_one_reg *reg)
+{
+	u64 off = core_reg_offset_from_id(reg->id);
+	int size = core_reg_size_from_offset(off);
+
+	if (size < 0)
+		return -EINVAL;
+
+	if (KVM_REG_SIZE(reg->id) != size)
+		return -EINVAL;
+
+	return 0;
+}
+
 static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 {
 	/*
@@ -197,10 +209,33 @@ unsigned long kvm_arm_num_regs(struct kv
 int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 {
 	unsigned int i;
-	const u64 core_reg = KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE;
 
 	for (i = 0; i < sizeof(struct kvm_regs) / sizeof(__u32); i++) {
-		if (put_user(core_reg | i, uindices))
+		u64 reg = KVM_REG_ARM64 | KVM_REG_ARM_CORE | i;
+		int size = core_reg_size_from_offset(i);
+
+		if (size < 0)
+			continue;
+
+		switch (size) {
+		case sizeof(__u32):
+			reg |= KVM_REG_SIZE_U32;
+			break;
+
+		case sizeof(__u64):
+			reg |= KVM_REG_SIZE_U64;
+			break;
+
+		case sizeof(__uint128_t):
+			reg |= KVM_REG_SIZE_U128;
+			break;
+
+		default:
+			WARN_ON(1);
+			continue;
+		}
+
+		if (put_user(reg, uindices))
 			return -EFAULT;
 		uindices++;
 	}
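
For reference (not part of the patch itself), a minimal userspace sketch of the
iteration pattern the commit message describes: fetch the ID list with
KVM_GET_REG_LIST, then feed each returned ID, whose size field is now correct,
straight into KVM_GET_ONE_REG.  The vcpu_fd argument, the dump_vcpu_regs()
name, and the 256-byte value buffer are illustrative assumptions; error
handling is abbreviated.

/* Illustrative only: iterate KVM_GET_REG_LIST and read back each register. */
#include <errno.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>

static int dump_vcpu_regs(int vcpu_fd)	/* vcpu_fd: assumed KVM vCPU fd */
{
	struct kvm_reg_list probe = { .n = 0 };
	struct kvm_reg_list *list;
	uint64_t i;

	/* First call fails with E2BIG but fills in the required count. */
	ioctl(vcpu_fd, KVM_GET_REG_LIST, &probe);

	list = calloc(1, sizeof(*list) + probe.n * sizeof(uint64_t));
	if (!list)
		return -ENOMEM;
	list->n = probe.n;

	if (ioctl(vcpu_fd, KVM_GET_REG_LIST, list) < 0) {
		free(list);
		return -errno;
	}

	for (i = 0; i < list->n; i++) {
		uint64_t id = list->reg[i];
		/* The access size is encoded in bits 55:52 of the ID. */
		size_t size = 1U << ((id & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT);
		uint8_t buf[256] = { 0 };	/* ample for any core register */
		struct kvm_one_reg reg = {
			.id   = id,
			.addr = (uint64_t)(uintptr_t)buf,
		};

		if (size > sizeof(buf))
			continue;

		/* With this fix, every listed ID should be readable as-is. */
		if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
			fprintf(stderr, "reg %#llx (%zu bytes): %m\n",
				(unsigned long long)id, size);
	}

	free(list);
	return 0;
}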