From: Andrey Ryabinin
To: Catalin Marinas, Will Deacon
Cc: Linus Torvalds, "Paul E. McKenney", Peter Zijlstra,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	Andrey Ryabinin
Subject: [PATCH] arm64: Implement 1- and 2-byte smp_load_acquire and smp_store_release
Date: Mon, 20 Apr 2015 18:45:53 +0300
Message-id: <1429544753-4120-1-git-send-email-a.ryabinin@samsung.com>
X-Mailer: git-send-email 2.3.5
X-Mailing-List: linux-kernel@vger.kernel.org

Commit 47933ad41a86 ("arch: Introduce smp_load_acquire(), smp_store_release()")
allowed only 4- and 8-byte smp_load_acquire() and smp_store_release(), so the
1- and 2-byte cases were left unimplemented on arm64.
Later, commit 536fa402221f ("compiler: Allow 1- and 2-byte smp_load_acquire()
and smp_store_release()") adjusted the definition of __native_word() to permit
1- and 2-byte smp_load_acquire() and smp_store_release(). However, the 1- and
2-byte cases in the arm64 version remained unimplemented.

Commit 8053871d0f7f ("smp: Fix smp_call_function_single_async() locking")
started using smp_load_acquire() to load the 2-byte csd->flags, which crashes
the arm64 kernel during boot.

Implement the 1- and 2-byte cases in arm64's smp_load_acquire() and
smp_store_release() to fix this.

Signed-off-by: Andrey Ryabinin
---
 arch/arm64/include/asm/barrier.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index a5abb00..71f19c4 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -65,6 +65,14 @@ do {									\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
 	switch (sizeof(*p)) {						\
+	case 1:								\
+		asm volatile ("stlrb %w1, %0"				\
+				: "=Q" (*p) : "r" (v) : "memory");	\
+		break;							\
+	case 2:								\
+		asm volatile ("stlrh %w1, %0"				\
+				: "=Q" (*p) : "r" (v) : "memory");	\
+		break;							\
 	case 4:								\
 		asm volatile ("stlr %w1, %0"				\
 				: "=Q" (*p) : "r" (v) : "memory");	\
@@ -81,6 +89,14 @@ do {									\
 	typeof(*p) ___p1;						\
 	compiletime_assert_atomic_type(*p);				\
 	switch (sizeof(*p)) {						\
+	case 1:								\
+		asm volatile ("ldarb %w0, %1"				\
+				: "=r" (___p1) : "Q" (*p) : "memory");	\
+		break;							\
+	case 2:								\
+		asm volatile ("ldarh %w0, %1"				\
+				: "=r" (___p1) : "Q" (*p) : "memory");	\
+		break;							\
 	case 4:								\
 		asm volatile ("ldar %w0, %1"				\
 				: "=r" (___p1) : "Q" (*p) : "memory");	\
-- 
2.3.5