From: Thomas Gleixner
To: Joel Fernandes, Tim Chen
Cc: Julien Desfossez, Peter Zijlstra, Vineeth Remanan Pillai, Aubrey Li,
    Nishanth Aravamudan, Ingo Molnar, Paul Turner, Linus Torvalds,
    Linux List Kernel Mailing, Dario Faggioli, Frédéric Weisbecker,
    Kees Cook, Greg Kerr, Phil Auld, Aaron Lu, Valentin Schneider,
    Mel Gorman, Pawan Gupta, Paolo Bonzini, "Luck, Tony"
Subject: Re: [RFC PATCH v4 00/19] Core scheduling v4
References: <3c3c56c1-b8dc-652c-535e-74f6dcf45560@linux.intel.com> <20200212230705.GA25315@sinkpad> <29d43466-1e18-6b42-d4d0-20ccde20ff07@linux.intel.com> <20200221232057.GA19671@sinkpad> <20200317005521.GA8244@google.com>
Date: Wed, 18 Mar 2020 12:53:01 +0100
Message-ID:
<877dzhc21u.fsf@nanos.tec.linutronix.de>

Joel,

Joel Fernandes writes:
> We have only 2 cores (4 HT) on many devices. It is not an option to
> dedicate a core to only running trusted code, that would kill
> performance. Another option is to designate a single HT of a
> particular core to run both untrusted code and an interrupt handler --
> but as Thomas pointed out, this does not work for per-CPU interrupts
> or managed interrupts, and the softirqs that they trigger. But if we
> just consider interrupts for which we can control the affinities (and
> assuming that most interrupts can be controlled like that), then maybe
> it will work? In the ChromeOS model, each untrusted task is in its own
> domain (cookie). So untrusted tasks cannot benefit from parallelism
> (in our case) anyway -- so it seems reasonable to run an affinable
> interrupt and an untrusted task on a particular designated core.
>
> (Just thinking out loud...). Another option could be a patch that
> Vineeth shared with me (that Peter experimentally wrote) where he
> sends an IPI from an interrupt handler to a sibling running untrusted
> guest code, which would result in it getting paused. I am hoping
> something like this could work on the host side as well (not just for
> guests). We could also set per-core state from the interrupted HT,
> possibly IPI'ing the untrusted sibling if we have to. If the sibling
> runs untrusted code *after* the other sibling's interrupt already
> started, then the schedule() loop on the untrusted sibling would spin,
> knowing the other sibling has an interrupt in progress. The softirq is
> a real problem though. Perhaps it can also set similar per-core state.
There is not much difference between bringing the sibling out of guest
mode and bringing it out of host user mode. Adding state to force
spinning until the other side has completed is not rocket science
either. But the whole concept is prone to starvation issues and full of
nasty corner cases. From experiments I did back in the L1TF days I'm
pretty much convinced that this can't result in a generally usable
solution.

Let me share a few thoughts on what might be doable with fewer horrors,
but be aware that this is mostly a brain dump of half-thought-out ideas.

1) Managed interrupts on multi-queue devices

   It should be reasonably simple to force a reduced number of queues,
   which would in turn allow shielding one or two CPUs from such
   interrupts and queue handling, for the price of indirection.

2) Timers and softirqs

   If device interrupts are targeted to "safe" CPUs, then the amount of
   timer and soft interrupt processing will be reduced as well. That
   still leaves e.g. network TX side soft interrupts when the task
   running on a shielded core does networking. Maybe that's a non-issue,
   but I'm not familiar enough with the network maze to give an answer.

   A possible workaround would be to force softirq processing into
   thread context so everything is under scheduler control. How well
   that scales is a different story.

   That would take care of the timer_list timers and reduce the
   remaining surface to hrtimer expiry callbacks. Most of them should be
   fine (doing wakeups or scheduler housekeeping of some sort). For the
   others we might just utilize the mechanism which PREEMPT_RT uses and
   force them off into softirq expiry mode.

Thanks,

        tglx
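A rough userspace sketch of the shielding idea in 1): shrink the device
to a couple of queues, then steer the remaining interrupts away from the
shielded CPUs. The CPU count and the shielded set below are made-up
examples, not anything from the mail; note that truly managed interrupts
have their affinity fixed by the irq core, so for those only the
reduced-queue approach applies, not plain affinity writes.

```shell
# Sketch only: CPU count and shielded set are illustrative assumptions.
NR_CPUS=8
# Shield CPUs 0 and 1 (one core plus its HT sibling) from device IRQs.
full=$(( (1 << NR_CPUS) - 1 ))      # mask of all CPUs: 0xff
shield=$(( (1 << 0) | (1 << 1) ))   # CPUs 0 and 1:     0x03
mask=$(printf '%x' $(( full & ~shield )))
echo "$mask"                        # fc: every CPU except 0 and 1
```

On a real system one would first reduce the queue count (e.g.
`ethtool -L eth0 combined 2`) and then write the mask to the remaining
non-managed interrupts via `/proc/irq/<nr>/smp_affinity`; writes to
managed interrupts are typically rejected by the kernel.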