Date: Wed, 31 May 2023 08:53:11 -0700
From: Jerry Snitselaar
To: Jarkko Sakkinen
Cc: linux-integrity@vger.kernel.org, Jason Gunthorpe, Alejandro Cabrera, Jarkko Sakkinen, stable@vger.kernel.org, Stefan Berger, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] tpm: factor out the user space mm from tpm_vtpm_set_locality()
Message-ID: <4wa3nb6h6ns3msp4o6wojhxwn37t5sxlgexacm7ga7rfcrnsfl@chr5qhednjdl>
References: <20230530205001.1302975-1-jarkko@kernel.org>
In-Reply-To: <20230530205001.1302975-1-jarkko@kernel.org>

On Tue, May 30, 2023 at 11:50:01PM +0300, Jarkko Sakkinen wrote:
> From: Jarkko Sakkinen
>
> vtpm_proxy_fops_set_locality() causes kernel buffers to be passed to
> copy_from_user() and copy_to_user().
>
> Factor out the crippled code away with help of an internal API for
> managing struct proxy_dev instances.
>
> Link: https://lore.kernel.org/all/20230530020133.235765-1-jarkko@kernel.org/
> Cc: stable@vger.kernel.org # v4.14+
> Fixes: be4c9acfe297 ("tpm: vtpm_proxy: Implement request_locality function.")
> Signed-off-by: Jarkko Sakkinen
> ---
> I've tested this on RISC-V QEMU and have not noticed issues so far, and
> thus dropped the RFC tag. I've been looking into the code a lot lately
> as I'm building a boot time support for it, which will allow to e.g.
> test IMA without a physical TPM.
>
>  drivers/char/tpm/tpm_vtpm_proxy.c | 168 ++++++++++++++++--------------
>  1 file changed, 87 insertions(+), 81 deletions(-)
>
> diff --git a/drivers/char/tpm/tpm_vtpm_proxy.c b/drivers/char/tpm/tpm_vtpm_proxy.c
> index 30e953988cab..83496742cc19 100644
> --- a/drivers/char/tpm/tpm_vtpm_proxy.c
> +++ b/drivers/char/tpm/tpm_vtpm_proxy.c
> @@ -38,7 +38,6 @@ struct proxy_dev {
>  #define STATE_OPENED_FLAG BIT(0)
>  #define STATE_WAIT_RESPONSE_FLAG BIT(1) /* waiting for emulator response */
>  #define STATE_REGISTERED_FLAG BIT(2)
> -#define STATE_DRIVER_COMMAND BIT(3) /* sending a driver specific command */
>
>          size_t req_len;              /* length of queued TPM request */
>          size_t resp_len;             /* length of queued TPM response */
> @@ -58,6 +57,53 @@ static void vtpm_proxy_delete_device(struct proxy_dev *proxy_dev);
>   * Functions related to 'server side'
>   */
>
> +static ssize_t __vtpm_proxy_copy_from(struct proxy_dev *proxy_dev, u8 *buf, size_t count)
> +{
> +        size_t len = proxy_dev->req_len;
> +
> +        if (!(proxy_dev->state & STATE_OPENED_FLAG))
> +                return -EPIPE;
> +
> +        if (count < len || len > sizeof(proxy_dev->buffer)) {
> +                pr_debug("Invalid size in recv: count=%zd, req_len=%zd\n", count, len);
> +                return -EIO;
> +        }
> +
> +        memcpy(buf, proxy_dev->buffer, len);
> +        memset(proxy_dev->buffer, 0, len);
> +        proxy_dev->req_len = 0;
> +        proxy_dev->state |= STATE_WAIT_RESPONSE_FLAG;
> +
> +        return len;
> +}
> +
> +static ssize_t __vtpm_proxy_copy_to(struct proxy_dev *proxy_dev, const u8 *buf, size_t count)
> +{
> +        if (!(proxy_dev->state & STATE_OPENED_FLAG))
> +                return -EPIPE;
> +
> +        if (count > sizeof(proxy_dev->buffer) || !(proxy_dev->state & STATE_WAIT_RESPONSE_FLAG))
> +                return -EIO;
> +
> +        proxy_dev->state &= ~STATE_WAIT_RESPONSE_FLAG;
> +        proxy_dev->req_len = 0;
> +        proxy_dev->resp_len = count;
> +        memcpy(proxy_dev->buffer, buf, count);
> +
> +        return count;
> +}
> +
> +static int vtpm_proxy_wait_for(struct proxy_dev *proxy_dev)
> +{
> +        if (wait_event_interruptible(
> +                        proxy_dev->wq,
> +                        proxy_dev->req_len != 0 || !(proxy_dev->state & STATE_OPENED_FLAG)))
> +                return -EINTR;
> +
> +        return 0;
> +}
> +
> +
>  /**
>   * vtpm_proxy_fops_read - Read TPM commands on 'server side'
>   *
> @@ -73,44 +119,26 @@ static ssize_t vtpm_proxy_fops_read(struct file *filp, char __user *buf,
>                              size_t count, loff_t *off)
>  {
>          struct proxy_dev *proxy_dev = filp->private_data;
> -        size_t len;
> -        int sig, rc;
> +        u8 *kernel_buf;
> +        ssize_t rc;
>
> -        sig = wait_event_interruptible(proxy_dev->wq,
> -                                       proxy_dev->req_len != 0 ||
> -                                       !(proxy_dev->state & STATE_OPENED_FLAG));
> -        if (sig)
> -                return -EINTR;
> -
> -        mutex_lock(&proxy_dev->buf_lock);
> -
> -        if (!(proxy_dev->state & STATE_OPENED_FLAG)) {
> -                mutex_unlock(&proxy_dev->buf_lock);
> -                return -EPIPE;
> -        }
> -
> -        len = proxy_dev->req_len;
> -
> -        if (count < len || len > sizeof(proxy_dev->buffer)) {
> -                mutex_unlock(&proxy_dev->buf_lock);
> -                pr_debug("Invalid size in recv: count=%zd, req_len=%zd\n",
> -                         count, len);
> -                return -EIO;
> -        }
> -
> -        rc = copy_to_user(buf, proxy_dev->buffer, len);
> -        memset(proxy_dev->buffer, 0, len);
> -        proxy_dev->req_len = 0;
> +        rc = vtpm_proxy_wait_for(proxy_dev);
> +        if (rc)
> +                return rc;
>
> -        if (!rc)
> -                proxy_dev->state |= STATE_WAIT_RESPONSE_FLAG;
> +        kernel_buf = kzalloc(sizeof(proxy_dev->buffer), GFP_KERNEL);
> +        if (!kernel_buf)
> +                return -ENOMEM;
>
> +        mutex_lock(&proxy_dev->buf_lock);
> +        rc = __vtpm_proxy_copy_from(proxy_dev, buf, count);

Is this meant to be kernel_buf instead of buf here?
To me it reads like this copies from proxy_dev->buffer into the user
pointer buf, and then the copy_to_user() below overwrites buf again with
the kzalloc'd kernel_buf. I put a rough sketch of what I was expecting at
the bottom of this mail.

>          mutex_unlock(&proxy_dev->buf_lock);
>
> -        if (rc)
> -                return -EFAULT;
> +        if (!rc && copy_to_user(buf, kernel_buf, count))
> +                rc = -EFAULT;
>
> -        return len;
> +        kfree(kernel_buf);
> +        return rc;
>  }
>
>  /**
> @@ -128,36 +156,26 @@ static ssize_t vtpm_proxy_fops_write(struct file *filp, const char __user *buf,
>                               size_t count, loff_t *off)
>  {
>          struct proxy_dev *proxy_dev = filp->private_data;
> +        u8 *kernel_buf;
> +        int rc;
>
> -        mutex_lock(&proxy_dev->buf_lock);
> -
> -        if (!(proxy_dev->state & STATE_OPENED_FLAG)) {
> -                mutex_unlock(&proxy_dev->buf_lock);
> -                return -EPIPE;
> -        }
> -
> -        if (count > sizeof(proxy_dev->buffer) ||
> -            !(proxy_dev->state & STATE_WAIT_RESPONSE_FLAG)) {
> -                mutex_unlock(&proxy_dev->buf_lock);
> -                return -EIO;
> -        }
> -
> -        proxy_dev->state &= ~STATE_WAIT_RESPONSE_FLAG;
> -
> -        proxy_dev->req_len = 0;
> +        kernel_buf = kzalloc(sizeof(proxy_dev->buffer), GFP_KERNEL);
> +        if (!kernel_buf)
> +                return -ENOMEM;
>
> -        if (copy_from_user(proxy_dev->buffer, buf, count)) {
> -                mutex_unlock(&proxy_dev->buf_lock);
> +        if (copy_from_user(kernel_buf, buf, count)) {
> +                kfree(kernel_buf);
>                  return -EFAULT;
>          }
>
> -        proxy_dev->resp_len = count;
> -
> +        mutex_lock(&proxy_dev->buf_lock);
> +        rc = __vtpm_proxy_copy_to(proxy_dev, kernel_buf, count);
>          mutex_unlock(&proxy_dev->buf_lock);
>
>          wake_up_interruptible(&proxy_dev->wq);
> +        kfree(kernel_buf);
>
> -        return count;
> +        return rc;
>  }
>
>  /*
> @@ -295,28 +313,6 @@ static int vtpm_proxy_tpm_op_recv(struct tpm_chip *chip, u8 *buf, size_t count)
>          return len;
>  }
>
> -static int vtpm_proxy_is_driver_command(struct tpm_chip *chip,
> -                                        u8 *buf, size_t count)
> -{
> -        struct tpm_header *hdr = (struct tpm_header *)buf;
> -
> -        if (count < sizeof(struct tpm_header))
> -                return 0;
> -
> -        if (chip->flags & TPM_CHIP_FLAG_TPM2) {
> -                switch (be32_to_cpu(hdr->ordinal)) {
> -                case TPM2_CC_SET_LOCALITY:
> -                        return 1;
> -                }
> -        } else {
> -                switch (be32_to_cpu(hdr->ordinal)) {
> -                case TPM_ORD_SET_LOCALITY:
> -                        return 1;
> -                }
> -        }
> -        return 0;
> -}
> -
>  /*
>   * Called when core TPM driver forwards TPM requests to 'server side'.
>   *
> @@ -330,6 +326,7 @@ static int vtpm_proxy_is_driver_command(struct tpm_chip *chip,
>  static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count)
>  {
>          struct proxy_dev *proxy_dev = dev_get_drvdata(&chip->dev);
> +        unsigned int ord = ((struct tpm_header *)buf)->ordinal;
>
>          if (count > sizeof(proxy_dev->buffer)) {
>                  dev_err(&chip->dev,
> @@ -338,8 +335,10 @@ static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count)
>                  return -EIO;
>          }
>
> -        if (!(proxy_dev->state & STATE_DRIVER_COMMAND) &&
> -            vtpm_proxy_is_driver_command(chip, buf, count))
> +        if ((chip->flags & TPM_CHIP_FLAG_TPM2) && ord == TPM2_CC_SET_LOCALITY)
> +                return -EFAULT;
> +
> +        if (ord == TPM_ORD_SET_LOCALITY)
>                  return -EFAULT;
>
>          mutex_lock(&proxy_dev->buf_lock);
> @@ -407,13 +406,20 @@ static int vtpm_proxy_request_locality(struct tpm_chip *chip, int locality)
>                               TPM_ORD_SET_LOCALITY);
>          if (rc)
>                  return rc;
> +
>          tpm_buf_append_u8(&buf, locality);
>
> -        proxy_dev->state |= STATE_DRIVER_COMMAND;
> +        mutex_lock(&proxy_dev->buf_lock);
> +        rc = __vtpm_proxy_copy_to(proxy_dev, buf.data, tpm_buf_length(&buf));
> +        mutex_unlock(&proxy_dev->buf_lock);
>
> -        rc = tpm_transmit_cmd(chip, &buf, 0, "attempting to set locality");
> +        rc = vtpm_proxy_wait_for(proxy_dev);
> +        if (rc)
> +                return rc;
>
> -        proxy_dev->state &= ~STATE_DRIVER_COMMAND;
> +        mutex_lock(&proxy_dev->buf_lock);
> +        rc = __vtpm_proxy_copy_from(proxy_dev, buf.data, tpm_buf_length(&buf));
> +        mutex_unlock(&proxy_dev->buf_lock);
>
>          if (rc < 0) {
>                  locality = rc;
> --
> 2.39.2
>
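
In case it helps, this is roughly the shape I was expecting the end of
vtpm_proxy_fops_read() to take. Untested, only meant to illustrate the
question above, and the length handling (rc vs. count in the
copy_to_user() call) is just my guess at the intent:

        mutex_lock(&proxy_dev->buf_lock);
        /* fill the kernel bounce buffer rather than the user pointer */
        rc = __vtpm_proxy_copy_from(proxy_dev, kernel_buf, count);
        mutex_unlock(&proxy_dev->buf_lock);

        /* on success rc holds the request length; push the kernel copy out to user space */
        if (rc > 0 && copy_to_user(buf, kernel_buf, rc))
                rc = -EFAULT;

        kfree(kernel_buf);
        return rc;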