Date: Thu, 7 Nov 2019 22:21:35 +0100 (CET)
From: Thomas Gleixner
To: Vitaly Kuznetsov
Cc: Sasha Levin, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
    x86@kernel.org, "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger,
    Ingo Molnar, Borislav Petkov,
Peter Anvin" , Roman Kagan , Michael Kelley , Joe Perches Subject: Re: [PATCH v3] x86/hyper-v: micro-optimize send_ipi_one case In-Reply-To: <877e4bbyw2.fsf@vitty.brq.redhat.com> Message-ID: References: <20191027151938.7296-1-vkuznets@redhat.com> <877e4bbyw2.fsf@vitty.brq.redhat.com> User-Agent: Alpine 2.21 (DEB 202 2017-01-01) MIME-Version: 1.0 Content-Type: text/plain; charset=US-ASCII X-Linutronix-Spam-Score: -1.0 X-Linutronix-Spam-Level: - X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,SHORTCIRCUIT=-0.0001 Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Thu, 7 Nov 2019, Vitaly Kuznetsov wrote: > Vitaly Kuznetsov writes: > > > When sending an IPI to a single CPU there is no need to deal with cpumasks. > > With 2 CPU guest on WS2019 I'm seeing a minor (like 3%, 8043 -> 7761 CPU > > cycles) improvement with smp_call_function_single() loop benchmark. The > > optimization, however, is tiny and straitforward. Also, send_ipi_one() is > > important for PV spinlock kick. > > > > I was also wondering if it would make sense to switch to using regular > > APIC IPI send for CPU > 64 case but no, it is twice as expesive (12650 CPU > > cycles for __send_ipi_mask_ex() call, 26000 for orig_apic.send_IPI(cpu, > > vector)). > > > > Signed-off-by: Vitaly Kuznetsov > > --- > > Changes since v2: > > - Check VP number instead of CPU number against >= 64 [Michael] > > - Check for VP_INVAL > > Hi Sasha, > > do you have plans to pick this up for hyperv-next or should we ask x86 > folks to? I'm picking up the constant TSC one anyway, so I can just throw that in as well. Thanks, tglx