Subject: Re: Redoing eXclusive Page Frame Ownership (XPFO) with isolated CPUs in mind (for KVM to isolate its guests per CPU)
To: Julian Stecklina, Linus Torvalds
Cc: David Woodhouse, Konrad Rzeszutek Wilk, juerg.haefliger@hpe.com, deepa.srinivasan@oracle.com, Jim Mattson, Andrew Cooper, Linux Kernel Mailing List, Boris Ostrovsky, linux-mm, Thomas Gleixner, joao.m.martins@oracle.com, pradeep.vincent@oracle.com, Andi Kleen, kanth.ghatraju@oracle.com, Liran Alon, Kees Cook, Kernel Hardening, chris.hyser@oracle.com, Tyler Hicks, John Haxby, Jon Masters
From: Khalid Aziz
Organization: Oracle Corp
Message-ID: <5efc291c-b0ed-577e-02d1-285d080c293d@oracle.com>
Date: Fri, 14 Sep 2018 11:06:53 -0600
On 09/12/2018 09:37 AM, Julian Stecklina wrote:
> Julian Stecklina writes:
>
>> Linus Torvalds writes:
>>
>>> On Fri, Aug 31, 2018 at 12:45 AM Julian Stecklina wrote:
>>>>
>>>> I've been spending some cycles on the XPFO patch set this week. For
>>>> the patch set as it was posted for v4.13, the performance overhead
>>>> of compiling a Linux kernel is ~40% on x86_64[1]. The overhead comes
>>>> almost completely from TLB flushing. If we can live with stale TLB
>>>> entries allowing temporary access (which I think is reasonable), we
>>>> can remove all TLB flushing (on x86). This reduces the overhead to
>>>> 2-3% for a kernel compile.
>>>
>>> I have to say, even 2-3% for a kernel compile sounds absolutely
>>> horrendous.
>>
>> Well, it's at least in a range where it doesn't look hopeless.
>>
>>> Kernel builds are 90% user space at least for me, so a 2-3% slowdown
>>> from the kernel is not some small unnoticeable thing.
>>
>> The overhead seems to come from the hooks that XPFO adds to
>> alloc/free_pages. These hooks add a couple of atomic operations per
>> allocated (4K) page for bookkeeping. Some of these atomic ops are only
>> for debugging and could be removed. There is also some opportunity to
>> streamline the per-page space overhead of XPFO.
>
> I've updated my XPFO branch[1] to make some of the debugging optional
> and also integrated the XPFO bookkeeping with struct page, instead of
> requiring CONFIG_PAGE_EXTENSION, which removes some checks in the hot
> path. These changes push the overhead down to somewhere between 1.5%
> and 2% for a kernel compile on my quad-core box. This is close to the
> measurement noise, so I'll take suggestions for a better benchmark
> here.
>
> Of course, if you hit contention on the xpfo spinlock, performance
> will suffer. I guess this is what happened on Khalid's large box.
>
> I'll try to remove the spinlocks and add fixup code to the page fault
> handler to see whether this improves the situation on large boxes.
> This might turn out to be ugly, though.
>

Hi Julian,

I ran tests with your updated code and gathered lock statistics. The
change in system time for "make -j60" was within the noise margin (it
actually went up by about 2%). There is some contention on xpfo_lock.
The average wait time does not look high compared to other locks, but
the maximum hold time looks a little long. From /proc/lock_stat
(times in microseconds):

                            con-bounces  contentions  waittime-min  waittime-max  waittime-total  waittime-avg
&(&page->xpfo_lock)->rlock:       29698        29897          0.06        134.39        15345.58          0.51

                            acq-bounces  acquisitions  holdtime-min  holdtime-max  holdtime-total  holdtime-avg
                              422474670     960222532          0.05      30362.05    195807002.62          0.20

Nevertheless, with some 960 million acquisitions during the build,
even a small average wait time can add up.

--
Khalid
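
To make the TLB-flushing cost discussed in this thread concrete, here
is a minimal sketch of the per-page work strict XPFO does when a page
is handed to userspace. flush_tlb_kernel_range() is the real kernel
interface; set_kpte() is a hypothetical stand-in for however the patch
set rewrites the direct-map PTE:

    /*
     * Sketch only: the per-page cost of strict XPFO. Every 4K page
     * allocated to userspace is removed from the kernel direct map,
     * and the now-stale translation must be shot down on all CPUs.
     * That global, IPI-based shootdown is what accounted for most of
     * the ~40% kernel-compile overhead quoted above.
     */
    static void xpfo_unmap_from_physmap(struct page *page)
    {
            unsigned long kaddr = (unsigned long)page_address(page);

            /* hypothetical helper: clear the direct-map PTE */
            set_kpte(page, kaddr, __pgprot(0));

            /* real API: flush the mapping on every CPU -- the expensive part */
            flush_tlb_kernel_range(kaddr, kaddr + PAGE_SIZE);
    }

Skipping the flush, as proposed above, trades this cost for a window in
which stale TLB entries still allow access to the unmapped page.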
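
The alloc/free_pages hooks Julian describes boil down to a little
per-page state plus a couple of atomic updates. A sketch assuming the
bookkeeping lives directly in struct page; the xpfo_mapcount field and
the PageXpfoUser flag are hypothetical names:

    /*
     * Sketch of the bookkeeping on the allocation path. One or two
     * atomic operations per 4K page is cheap in isolation but runs
     * millions of times during a kernel compile.
     */
    static void xpfo_alloc_pages(struct page *page, int order, gfp_t gfp)
    {
            int i;

            for (i = 0; i < (1 << order); i++) {
                    /* only pages that can map into userspace are tracked */
                    if ((gfp & GFP_HIGHUSER) != GFP_HIGHUSER)
                            continue;

                    atomic_set(&page[i].xpfo_mapcount, 0); /* hypothetical field */
                    SetPageXpfoUser(&page[i]);             /* hypothetical flag */
            }
    }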
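
The lock in the lock_stat excerpt is taken whenever the kernel needs a
temporary mapping of a userspace-owned page: the first mapper has to
restore the direct-map entry, and concurrent kmap/kunmap of the same
page must be serialized. A hedged sketch, using the same hypothetical
names as above:

    static void xpfo_kmap(void *kaddr, struct page *page)
    {
            unsigned long flags;

            if (!PageXpfoUser(page))
                    return;

            /* the &(&page->xpfo_lock)->rlock entry in the statistics above */
            spin_lock_irqsave(&page->xpfo_lock, flags);

            /* first mapper puts the page back into the physmap */
            if (atomic_inc_return(&page->xpfo_mapcount) == 1)
                    set_kpte(page, (unsigned long)kaddr, PAGE_KERNEL);

            spin_unlock_irqrestore(&page->xpfo_lock, flags);
    }

Removing the spinlock, as Julian proposes, would mean tolerating a
window where the mapping is absent and repairing it from the page
fault handler instead.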