From: "Levin, Alexander (Sasha Levin)"
To: "stable@vger.kernel.org" , "linux-kernel@vger.kernel.org"
Cc: Heiko Carstens , Martin Schwidefsky , "Levin, Alexander (Sasha Levin)"
Subject: [PATCH for v4.9 LTS 77/86] s390/ctl_reg: make __ctl_load a full memory barrier
Date: Sat, 17 Jun 2017 22:24:54 +0000
Message-ID: <20170617222420.19316-77-alexander.levin@verizon.com>
In-Reply-To: <20170617222420.19316-1-alexander.levin@verizon.com>
References: <20170617222420.19316-1-alexander.levin@verizon.com>

From: Heiko Carstens

[ Upstream commit e991c24d68b8c0ba297eeb7af80b1e398e98c33f ]

We have quite a lot of code that depends on the order of the __ctl_load
inline assembly and subsequent memory accesses, e.g. disabling lowcore
protection and then writing to the lowcore. Since the __ctl_load macro
has neither memory barrier semantics nor any other dependencies, the
compiler is, in theory, free to shuffle code around. In other words:
the store to the lowcore could happen before lowcore protection is
disabled.

To avoid this class of potential bugs, simply add a full memory barrier
to the __ctl_load macro.

Signed-off-by: Heiko Carstens
Signed-off-by: Martin Schwidefsky
Signed-off-by: Sasha Levin
---
 arch/s390/include/asm/ctl_reg.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/s390/include/asm/ctl_reg.h b/arch/s390/include/asm/ctl_reg.h
index d7697ab802f6..8e136b88cdf4 100644
--- a/arch/s390/include/asm/ctl_reg.h
+++ b/arch/s390/include/asm/ctl_reg.h
@@ -15,7 +15,9 @@
 	BUILD_BUG_ON(sizeof(addrtype) != (high - low + 1) * sizeof(long));\
 	asm volatile(							\
 		"	lctlg	%1,%2,%0\n"				\
-		: : "Q" (*(addrtype *)(&array)), "i" (low), "i" (high));\
+		:							\
+		: "Q" (*(addrtype *)(&array)), "i" (low), "i" (high)	\
+		: "memory");						\
 }
 
 #define __ctl_store(array, low, high) {					\
-- 
2.11.0
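
[Editor's note] For context on why this one-line change is sufficient:
gcc may reorder ordinary loads and stores across an asm statement, even
a volatile one, unless the asm declares a "memory" clobber. A minimal,
host-compilable sketch follows; the macro names, the variables, and the
empty asm template are all made up for illustration (the real macro
issues lctlg, which only exists on s390):

	/*
	 * Minimal sketch -- not the kernel macro.  All names here
	 * (load_ctl_no_barrier, load_ctl_barrier, lowcore_word) are
	 * hypothetical; "" stands in for the real lctlg instruction.
	 */
	static unsigned long ctl_word;
	static int lowcore_word;

	/*
	 * "volatile" keeps the asm from being deleted or duplicated,
	 * but it does NOT order it against surrounding ordinary memory
	 * accesses: the compiler may still sink or hoist loads and
	 * stores across it.
	 */
	#define load_ctl_no_barrier(x) \
		asm volatile("" : : "m" (x))

	/*
	 * The "memory" clobber tells the compiler the asm may read or
	 * write any memory, so earlier stores must be emitted before
	 * it and later accesses may not be moved above it -- a full
	 * compiler barrier.
	 */
	#define load_ctl_barrier(x) \
		asm volatile("" : : "m" (x) : "memory")

	void demo(void)
	{
		load_ctl_barrier(ctl_word);	/* think: "disable protection" */
		lowcore_word = 1;		/* stays after the asm */
	}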
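
And a hedged sketch of the caller shape the commit message alludes to.
The control register number, bit position, and helper name below are
illustrative only and are not taken from the patch:

	/*
	 * Hypothetical caller: clear a protection bit in control
	 * register 0, then store to the previously protected area.
	 */
	static inline void disable_lowcore_protection_sketch(void)
	{
		unsigned long cr0;

		__ctl_store(cr0, 0, 0);	/* read control register 0 */
		cr0 &= ~(1UL << 28);	/* hypothetical protection bit */
		__ctl_load(cr0, 0, 0);	/* with this patch: full barrier */
	}

	/*
	 * Any store to the protected area that follows the helper now
	 * stays after the lctlg; before the patch the compiler was, in
	 * theory, free to move it above.
	 */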