From: Andy Lutomirski
Date: Tue, 12 Jan 2021 09:21:21 -0800
Subject: Re: [PATCH v2 1/3] x86/mce: Avoid infinite loop for copy from user recovery
To: "Luck, Tony"
Cc: Andy Lutomirski, Borislav Petkov, X86 ML, Andrew Morton,
    Peter Zijlstra, Darren Hart, LKML, linux-edac, Linux-MM
In-Reply-To: <20210112171628.GA15664@agluck-desk2.amr.corp.intel.com>
References: <20210111214452.1826-2-tony.luck@intel.com>
    <20210111222057.GA2369@agluck-desk2.amr.corp.intel.com>
    <20210112171628.GA15664@agluck-desk2.amr.corp.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jan 12, 2021 at 9:16 AM Luck, Tony wrote:
>
> On Tue, Jan 12, 2021 at 09:00:14AM -0800, Andy Lutomirski wrote:
> > > On Jan 11, 2021, at 2:21 PM, Luck, Tony wrote:
> > >
> > > On Mon, Jan 11, 2021 at 02:11:56PM -0800, Andy Lutomirski wrote:
> > >>
> > >>>> On Jan 11, 2021, at 1:45 PM, Tony Luck wrote:
> > >>>
> > >>> Recovery action when get_user() triggers a machine check uses the fixup
> > >>> path to make get_user() return -EFAULT.  Also queue_task_work() sets up
> > >>> so that kill_me_maybe() will be called on return to user mode to send a
> > >>> SIGBUS to the current process.
> > >>>
> > >>> But there are places in the kernel where the code assumes that this
> > >>> EFAULT return was simply because of a page fault.  The code takes some
> > >>> action to fix that, and then retries the access.  This results in a
> > >>> second machine check.
> > >>>
> > >>> While processing this second machine check queue_task_work() is called
> > >>> again.  But since this uses the same callback_head structure that
> > >>> was used in the first call, the net result is an entry on the
> > >>> current->task_works list that points to itself.
> > >>
> > >> Is this happening in pagefault_disable context or normal sleepable
> > >> fault context?  If the latter, maybe we should reconsider finding a
> > >> way for the machine check code to do its work inline instead of
> > >> deferring it.
> > >
> > > The first machine check is in pagefault_disable() context.
> > >
> > > static int get_futex_value_locked(u32 *dest, u32 __user *from)
> > > {
> > >         int ret;
> > >
> > >         pagefault_disable();
> > >         ret = __get_user(*dest, from);
> >
> > I have very mixed feelings as to whether we should even try to recover
> > from the first failure like this.  If we actually want to try to
> > recover, perhaps we should instead arrange for the second MCE to
> > recover successfully instead of panicking.
>
> Well we obviously have to "recover" from the first machine check
> in order to get to the second.  Are you saying you don't like the
> different return value from get_user()?
>
> In my initial playing around with this I just had the second machine
> check simply skip the task_work_add().  This worked for this case, but
> only because there wasn't a third, fourth, etc. access to the poisoned
> data.  If the caller keeps peeking, then we'll keep taking more machine
> checks - possibly forever.
>
> Even if we do recover with just one extra machine check ... that's one
> more than was necessary.

Well, we need to do *something* when the first __get_user() trips the
#MC.  It would be nice if we could actually fix up the page tables
inside the #MC handler, but, if we're in a pagefault_disable() context
we might have locks held.  Heck, we could have the pagetable lock held,
be inside NMI, etc.  Skipping the task_work_add() might actually make
sense if we get a second one.

We won't actually infinite loop in pagefault_disable() context -- if we
would, then we would also infinite loop just from a regular page fault,
too.
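
For what it's worth, here is a rough, untested sketch of the "only queue
it once" idea discussed above.  The mce_count field and the exact shape
of queue_task_work() are placeholders for illustration, not the actual
patch:

	/*
	 * Sketch: count the #MC hits this task has taken since it last
	 * returned to user mode, and only queue the task_work on the
	 * first one.  Re-adding the same callback_head on a second #MC
	 * is what makes current->task_works point to itself.
	 */
	static void queue_task_work(struct mce *m)
	{
		current->mce_addr = m->addr;

		if (current->mce_count++ == 0) {
			current->mce_kill_me.func = kill_me_maybe;
			task_work_add(current, &current->mce_kill_me, TWA_RESUME);
		}
		/*
		 * Later #MCs from retries of the same access fall through:
		 * the work is already queued, the fixup still makes the
		 * retried get_user() fail, and kill_me_maybe() runs (and
		 * can reset mce_count) on the way back to user mode.
		 */
	}

The detail that matters is just that a second machine check must never
hand task_work_add() a callback_head that is already on the list.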