From: Kyle Huey
Date: Thu, 18 Aug 2022 12:48:23 -0700
Subject: Re: [PATCH v5 1/2] x86/fpu: Allow PKRU to be (once again) written by ptrace.
To: Thomas Gleixner
Cc: Dave Hansen, Borislav Petkov, Ingo Molnar, x86@kernel.org, "H. Peter Anvin", Paolo Bonzini, Andy Lutomirski, Peter Zijlstra, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, "Robert O'Callahan", David Manouchehri, kvm@vger.kernel.org, stable@vger.kernel.org
In-Reply-To: <87ilmpzunz.ffs@tglx>
References: <20220808141538.102394-1-khuey@kylehuey.com> <87ilmpzunz.ffs@tglx>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Aug 18, 2022 at 3:57 AM Thomas Gleixner wrote:
>
> Kyle!

Hi.
> On Mon, Aug 08 2022 at 07:15, Kyle Huey wrote:
> > When management of the PKRU register was moved away from XSTATE, emulation
> > of PKRU's existence in XSTATE was added for APIs that read XSTATE, but not
> > for APIs that write XSTATE. This can be seen by running gdb and executing
> > `p $pkru`, `set $pkru = 42`, and `p $pkru`. On affected kernels (5.14+) the
> > write to the PKRU register (which gdb performs through ptrace) is ignored.
> >
> > There are three relevant APIs: PTRACE_SETREGSET with NT_X86_XSTATE,
> > sigreturn, and KVM_SET_XSAVE. KVM_SET_XSAVE has its own special handling to
> > make PKRU writes take effect (in fpu_copy_uabi_to_guest_fpstate). Push that
> > down into copy_uabi_to_xstate and have PTRACE_SETREGSET with NT_X86_XSTATE
> > and sigreturn pass in pointers to the appropriate PKRU value.
> >
> > This also adds code to initialize the PKRU value to the hardware init value
> > (namely 0) if the PKRU bit is not set in the XSTATE header to match XRSTOR.
> > This is a change to the current KVM_SET_XSAVE behavior.
>
> You are stating a fact here, but provide 0 justification why this is
> correct.

Well, the justification is that this *is* the behavior we want for
ptrace/sigreturn. It's very likely that the existing KVM_SET_XSAVE behavior
in this edge case is an oversight rather than intentional, and in the
absence of confirmation that KVM wants the existing behavior (the KVM
mailing list and maintainer are CCed), one correct code path is better than
one correct code path and one buggy code path.

> > Changelog since v4:
>
> Can you please put the change log past the --- separator line, so it
> gets stripped off when the patch is applied? That spares manual fixups.

Ok.
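For illustration, the init-state fallback described above (treat PKRU as its hardware init value 0 whenever its bit is clear in the XSTATE header, mirroring XRSTOR) can be modeled as a tiny standalone function. This is only a sketch of the semantics under discussion, not the kernel code; `pkru_from_uabi` is a hypothetical name, while `XFEATURE_PKRU` being bit 9 matches the kernel's xfeature numbering:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Standalone model (not the kernel implementation) of the PKRU write
 * semantics discussed above: if the PKRU feature bit is clear in the
 * XSTATE header, the component is in its init state, so mirror XRSTOR
 * and use the hardware init value 0; otherwise take the value supplied
 * in the user buffer.
 */
#define XFEATURE_PKRU		9
#define XFEATURE_MASK_PKRU	(1ULL << XFEATURE_PKRU)

static uint32_t pkru_from_uabi(uint64_t hdr_xfeatures, uint32_t buf_pkru)
{
	return (hdr_xfeatures & XFEATURE_MASK_PKRU) ? buf_pkru : 0;
}
```

Under this model, a buffer whose header omits PKRU yields 0 regardless of the buffer contents, which is exactly the behavior change to KVM_SET_XSAVE being debated here.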
> > Signed-off-by: Kyle Huey
> > Cc: Dave Hansen
> > Cc: Thomas Gleixner
> > Cc: Borislav Petkov
> > Cc: kvm@vger.kernel.org # For edge case behavior of KVM_SET_XSAVE
> > Cc: stable@vger.kernel.org # 5.14+
> > Fixes: e84ba47e313d ("x86/fpu: Hook up PKRU into ptrace()")
>
> Can you please use the documented tag ordering?
>
> https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#patch-submission-notes

Ok.

> > @@ -1235,6 +1235,24 @@ static int copy_uabi_to_xstate(struct fpstate *fpstate, const void *kbuf,
> >  	for (i = 0; i < XFEATURE_MAX; i++) {
> >  		mask = BIT_ULL(i);
> >
> > +		if (i == XFEATURE_PKRU) {
> > +			/*
> > +			 * Retrieve PKRU if not in init state, otherwise
> > +			 * initialize it.
> > +			 */
> > +			if (hdr.xfeatures & mask) {
> > +				struct pkru_state xpkru = {0};
> > +
> > +				if (copy_from_buffer(&xpkru, xstate_offsets[i],
> > +						     sizeof(xpkru), kbuf, ubuf))
> > +					return -EFAULT;
> > +
> > +				*pkru = xpkru.pkru;
> > +			} else {
> > +				*pkru = 0;
> > +			}
> > +		}
>
> That's really horrible and there is no point in copying the stuff from
> the buffer twice:
>
> @@ -1246,6 +1246,15 @@ static int copy_uabi_to_xstate(struct fp
>  		}
>  	}
>
> +	/* Update the user protection key storage */
> +	*pkru = 0;
> +	if (hdr.xfeatures & XFEATURE_MASK_PKRU) {
> +		struct pkru_state *xpkru;
> +
> +		xpkru = get_xsave_addr(xsave, XFEATURE_PKRU);
> +		*pkru = xpkru->pkru;
> +	}
> +
>
> Hmm?

It took me a bit to figure out what this is actually trying to do. To
work, it would need to come at the very end of copy_uabi_to_xstate, after
xsave->header.xfeatures is updated.
If you just want to avoid two copies I would counter-propose this instead:

@@ -1235,7 +1235,19 @@ static int copy_uabi_to_xstate(struct fpstate *fpstate, const void *kbuf,
 	for (i = 0; i < XFEATURE_MAX; i++) {
 		mask = BIT_ULL(i);
 
-		if (hdr.xfeatures & mask) {
+		if (i == XFEATURE_PKRU) {
+			/* Update the user protection key storage */
+			*pkru = 0;
+			if (hdr.xfeatures & XFEATURE_MASK_PKRU) {
+				struct pkru_state xpkru = {0};
+
+				if (copy_from_buffer(&xpkru, xstate_offsets[i],
+						     sizeof(xpkru), kbuf, ubuf))
+					return -EFAULT;
+
+				*pkru = xpkru.pkru;
+			}
+		} else if (hdr.xfeatures & mask) {
 			void *dst = __raw_xsave_addr(xsave, i);
 
 			offset = xstate_offsets[i];

Thoughts? This avoids a second copy and avoids having to calculate the
offset into the (now potentially compressed) XSTATE.

- Kyle

> Thanks,
>
> tglx