Date: Wed, 10 Oct 2018 14:47:17 +0200 (CEST)
From: Thomas Gleixner
To: David Woodhouse
Cc: Juergen Gross, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
    x86@kernel.org, boris.ostrovsky@oracle.com, hpa@zytor.com, mingo@redhat.com,
    bp@alien8.de, stable@vger.kernel.org, Waiman.Long@hp.com, peterz@infradead.org
Subject: Re: [PATCH 2/2] xen: make xen_qlock_wait() nestable
References: <20181001071641.19282-1-jgross@suse.com>
    <20181001071641.19282-3-jgross@suse.com>
    <47686a61dfc06aa5afb05a893b9a56e6eb46763d.camel@infradead.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, 10 Oct 2018, David Woodhouse wrote:
> On Wed, 2018-10-10 at 14:30 +0200, Thomas Gleixner wrote:
> > On Wed, 10 Oct 2018, David Woodhouse wrote:
> > > On Mon, 2018-10-01 at 09:16 +0200, Juergen Gross wrote:
> > > > -	/* If irq pending already clear it and return. */
> > > > +	/* Guard against reentry. */
> > > > +	local_irq_save(flags);
> > > > +
> > > > +	/* If irq pending already clear it. */
> > > >  	if (xen_test_irq_pending(irq)) {
> > > >  		xen_clear_irq_pending(irq);
> > > > -		return;
> > > > +	} else if (READ_ONCE(*byte) == val) {
> > > > +		/* Block until irq becomes pending (or a spurious wakeup) */
> > > > +		xen_poll_irq(irq);
> > > >  	}
> > >
> > > Does this still allow other IRQs to wake it from xen_poll_irq()?
> > >
> > > In the case where process-context code is spinning for a lock without
> > > disabling interrupts, we *should* allow interrupts to occur still...
> > > does this?
> >
> > Yes. Look at it like idle HLT or WFI. You have to disable interrupts before
> > checking the condition and then the hardware or in this case the hypervisor
> > has to bring you back when an interrupt is raised.
> >
> > If that would not work then the check would be racy, because the interrupt
> > could hit and be handled after the check and before going into
> > HLT/WFI/hypercall and then the thing is out until the next interrupt comes
> > along, which might be never.
>
> Right, but in this case we're calling into the hypervisor to poll for
> one *specific* IRQ. Everything you say is true for that specific IRQ.
>
> My question is what happens to *other* IRQs. We want them, but are they
> masked? I'm staring at the Xen do_poll() code and haven't quite worked
> that out...

Ah, sorry. That of course has to come back like HLT/WFI for any interrupt,
but I have no idea what the Xen HV is doing there.

Thanks,

	tglx