From: Florian Weimer
To: Mathieu Desnoyers
Cc: Peter Zijlstra, linux-kernel@vger.kernel.org, Thomas Gleixner,
 "Paul E. McKenney", Boqun Feng, "H. Peter Anvin", Paul Turner,
 linux-api@vger.kernel.org, Christian Brauner, David.Laight@ACULAB.COM,
 carlos@redhat.com, Peter Oskolkov
Subject: Re: [RFC PATCH 2/3] rseq: extend struct rseq with per thread group vcpu id
References: <20220201192540.10439-1-mathieu.desnoyers@efficios.com>
 <20220201192540.10439-2-mathieu.desnoyers@efficios.com>
Date: Tue, 01 Feb 2022 21:03:13 +0100
In-Reply-To: <20220201192540.10439-2-mathieu.desnoyers@efficios.com>
 (Mathieu Desnoyers's message of "Tue, 1 Feb 2022 14:25:39 -0500")
Message-ID: <87bkzqz75q.fsf@mid.deneb.enyo.de>
X-Mailing-List: linux-kernel@vger.kernel.org

* Mathieu Desnoyers:

> If a thread group has fewer threads than cores, or is limited to run on
> few cores concurrently through sched affinity or cgroup cpusets, the
> virtual cpu ids will be values close to 0, thus allowing efficient use
> of user-space memory for per-cpu data structures.

From a userspace programmer perspective, what's a good way to obtain a
reasonable upper bound for the possible tg_vcpu_id values?  I believe
not all users of cgroup cpusets change the affinity mask.

> diff --git a/kernel/rseq.c b/kernel/rseq.c
> index 13f6d0419f31..37b43735a400 100644
> --- a/kernel/rseq.c
> +++ b/kernel/rseq.c
> @@ -86,10 +86,14 @@ static int rseq_update_cpu_node_id(struct task_struct *t)
>  	struct rseq __user *rseq = t->rseq;
>  	u32 cpu_id = raw_smp_processor_id();
>  	u32 node_id = cpu_to_node(cpu_id);
> +	u32 tg_vcpu_id = task_tg_vcpu_id(t);
>
>  	if (!user_write_access_begin(rseq, t->rseq_len))
>  		goto efault;
>  	switch (t->rseq_len) {
> +	case offsetofend(struct rseq, tg_vcpu_id):
> +		unsafe_put_user(tg_vcpu_id, &rseq->tg_vcpu_id, efault_end);
> +		fallthrough;
>  	case offsetofend(struct rseq, node_id):
>  		unsafe_put_user(node_id, &rseq->node_id, efault_end);
>  		fallthrough;

Is the switch really useful?
I suspect it's faster to just write as much as possible all the time. The switch should be well-predictable if running uniform userspace, but still …