Subject: Re: [PATCH v3 0/6] Static calls
From: "H. Peter Anvin"
To: Andy Lutomirski
Cc: Jiri Kosina, Linus Torvalds, Josh Poimboeuf, Nadav Amit,
    Peter Zijlstra, the arch/x86 maintainers, Linux List Kernel Mailing,
    Ard Biesheuvel, Steven Rostedt, Ingo Molnar, Thomas Gleixner,
    Masami Hiramatsu, Jason Baron, David Laight, Borislav Petkov,
    Julia Cartwright, Jessica Yu, Rasmus Villemoes, Edward Cree,
    Daniel Bristot de Oliveira
Date: Mon, 14 Jan 2019 21:37:44 -0800
References: <20190110203023.GL2861@worktop.programming.kicks-ass.net>
    <20190110205226.iburt6mrddsxnjpk@treble>
    <20190111151525.tf7lhuycyyvjjxez@treble>
    <12578A17-E695-4DD5-AEC7-E29FAB2C8322@zytor.com>
    <5cbd249a-3b2b-6b3b-fb52-67571617403f@zytor.com>
    <207c865e-a92a-1647-b1b0-363010383cc3@zytor.com>
    <9f60be8c-47fb-195b-fdb4-4098f1df3dc2@zytor.com>
    <8ca16cca-101d-1d1b-b3da-c9727665fec8@zytor.com>

On 1/14/19 9:01 PM, H. Peter Anvin wrote:
>
> This could be as simple as spinning for a limited time waiting for
> states 0 or 3 if we are not the patching CPU. It is also not necessary
> to wait for the mask to become zero for the first sync if we find
> ourselves suddenly in state 4.
>

So this would look something like this for the #BP handler; I think this
is safe. This uses the TLB miss on the write page intentionally to slow
down the loop a bit to reduce the risk of livelock.

Note that "bp_write_addr" here refers to the write address for the
breakpoint that was taken.

        state = atomic_read(&bp_poke_state);
        if (state == 0)
                return 0;       /* No patching in progress */

recheck:
        clear bit in mask

        switch (state) {
        case 1:
        case 4:
                if (smp_processor_id() != bp_patching_cpu) {
                        int retries = NNN;
                        while (retries--) {
                                invlpg
                                if (*bp_write_addr != 0xcc)
                                        goto recheck;
                                state = atomic_read(&bp_poke_state);
                                if (state != 1 && state != 4)
                                        goto recheck;
                        }
                }
                state = cmpxchg(&bp_poke_state, 1, 4);
                if (state != 1 && state != 4)
                        goto recheck;
                atomic_write(bp_write_addr, bp_old_value);
                break;
        case 2:
                if (smp_processor_id() != bp_patching_cpu) {
                        invlpg
                        state = atomic_read(&bp_poke_state);
                        if (state != 2)
                                goto recheck;
                }
                complete patch sequence
                remove breakpoint
                break;
        case 3:
        case 0:
                /*
                 * If we are here, the #BP will go away on its
                 * own, or we will re-take it if it was a "real"
                 * breakpoint.
                 */
                break;
        }
        return 1;
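
For reference, the pseudocode above could be fleshed out roughly as below.
This is only a sketch under stated assumptions, not the actual
implementation: bp_sync_mask, bp_invlpg(), bp_finish_patch(),
bp_remove_int3() and BP_SPIN_RETRIES are hypothetical stand-ins for
"clear bit in mask", "invlpg", "complete patch sequence",
"remove breakpoint" and "NNN"; the shared variables keep the names used
in the thread.

#include <linux/atomic.h>
#include <linux/compiler.h>
#include <linux/cpumask.h>
#include <linux/smp.h>

/* Shared patching state, named as in the discussion above. */
static atomic_t bp_poke_state;                  /* states 0..4 as described */
static int bp_patching_cpu;
static unsigned char *bp_write_addr;            /* writable alias of the patched byte */
static unsigned char bp_old_value;
static struct cpumask bp_sync_mask;             /* hypothetical "mask" of un-synced CPUs */
#define BP_SPIN_RETRIES 1000                    /* hypothetical bound for "NNN" */

/* Hypothetical helpers, not real kernel APIs. */
void bp_invlpg(const void *addr);               /* flush the TLB entry for addr */
void bp_finish_patch(unsigned char *addr);      /* write the remaining instruction bytes */
void bp_remove_int3(unsigned char *addr);       /* restore the first byte over the 0xcc */

static int bp_int3_handler_sketch(void)
{
        int state = atomic_read(&bp_poke_state);

        if (state == 0)
                return 0;                       /* no patching in progress */

recheck:
        /* tell the patching CPU this CPU has observed the current state */
        cpumask_clear_cpu(smp_processor_id(), &bp_sync_mask);

        switch (state) {
        case 1:                                 /* breakpoint(s) installed */
        case 4:                                 /* abort requested */
                if (smp_processor_id() != bp_patching_cpu) {
                        int retries = BP_SPIN_RETRIES;

                        while (retries--) {
                                bp_invlpg(bp_write_addr);       /* deliberate TLB miss */
                                if (*bp_write_addr != 0xcc)
                                        goto recheck;
                                state = atomic_read(&bp_poke_state);
                                if (state != 1 && state != 4)
                                        goto recheck;
                        }
                }
                /* spun too long: request an abort (1 -> 4) and undo the int3 */
                state = atomic_cmpxchg(&bp_poke_state, 1, 4);
                if (state != 1 && state != 4)
                        goto recheck;
                WRITE_ONCE(*bp_write_addr, bp_old_value);
                break;

        case 2:                                 /* tail of the new instruction written */
                if (smp_processor_id() != bp_patching_cpu) {
                        bp_invlpg(bp_write_addr);
                        state = atomic_read(&bp_poke_state);
                        if (state != 2)
                                goto recheck;
                }
                bp_finish_patch(bp_write_addr);         /* complete patch sequence */
                bp_remove_int3(bp_write_addr);          /* remove breakpoint */
                break;

        case 3:                                 /* patching finished */
        case 0:
                /*
                 * The #BP will go away on its own, or we will re-take
                 * it if it was a "real" breakpoint.
                 */
                break;
        }
        return 1;
}

The bp_invlpg() before each re-read is what keeps the spin loop slow, per
the livelock argument above; everything else follows the pseudocode
line for line.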