From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Jose Abreu, Alexey Brodkin, Joao Pinto, David Laight,
    Vineet Gupta, Sasha Levin, linux-snps-arc@lists.infradead.org
Subject: [PATCH AUTOSEL 4.14 07/41] ARC: io.h: Implement reads{x}()/writes{x}()
Date: Wed, 12 Dec 2018 23:30:20 -0500
Message-Id: <20181213043054.75891-7-sashal@kernel.org>
In-Reply-To: <20181213043054.75891-1-sashal@kernel.org>
References: <20181213043054.75891-1-sashal@kernel.org>
MIME-Version: 1.0

From: Jose Abreu

[ Upstream commit 10d443431dc2bb733cf7add99b453e3fb9047a2e ]

Some ARC CPUs do not support unaligned loads/stores. Currently, ARC
uses the generic implementation of reads{b,w,l}()/writes{b,w,l}(),
which can cause drivers to malfunction because the generic functions
do a plain dereference of a pointer that may be unaligned.

Use the {get,put}_unaligned() helpers instead of a plain pointer
dereference to fix this. The helpers load and store data at an
unaligned address while respecting the CPU's alignment requirements.
According to [1], these helpers are costly in terms of performance,
so an initial check for an already-aligned buffer was added and the
helpers are only used when strictly needed.

[1] Documentation/unaligned-memory-access.txt
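For readers less familiar with the token-pasting macros in the diff
below, the 32-bit read variant generated by __raw_readsx(32, l)
expands to roughly the following (whitespace simplified, shown here
for illustration only):

	static inline void __raw_readsl(const volatile void __iomem *addr,
					void *ptr, unsigned int count)
	{
		bool is_aligned = ((unsigned long)ptr % 4) == 0;
		u32 *buf = ptr;

		if (!count)
			return;

		if (is_aligned) {
			/* Fast path: destination is 4-byte aligned,
			 * plain stores are safe */
			do {
				u32 x = __raw_readl(addr);
				*buf++ = x;
			} while (--count);
		} else {
			/* Slow path: put_unaligned() stores x without
			 * assuming buf is aligned */
			do {
				u32 x = __raw_readl(addr);
				put_unaligned(x, buf++);
			} while (--count);
		}
	}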
Cc: Alexey Brodkin
Cc: Joao Pinto
Cc: David Laight
Tested-by: Vitor Soares
Signed-off-by: Jose Abreu
Signed-off-by: Vineet Gupta
Signed-off-by: Sasha Levin
---
 arch/arc/include/asm/io.h | 72 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/arch/arc/include/asm/io.h b/arch/arc/include/asm/io.h
index c22b181e8206..2f39d9b3886e 100644
--- a/arch/arc/include/asm/io.h
+++ b/arch/arc/include/asm/io.h
@@ -12,6 +12,7 @@
 #include <linux/types.h>
 #include <asm/byteorder.h>
 #include <asm/page.h>
+#include <asm/unaligned.h>
 
 #ifdef CONFIG_ISA_ARCV2
 #include <asm/barrier.h>
@@ -94,6 +95,42 @@ static inline u32 __raw_readl(const volatile void __iomem *addr)
 	return w;
 }
 
+/*
+ * {read,write}s{b,w,l}() repeatedly access the same IO address in
+ * native endianness in 8-, 16-, 32-bit chunks {into,from} memory,
+ * @count times
+ */
+#define __raw_readsx(t,f)						\
+static inline void __raw_reads##f(const volatile void __iomem *addr,	\
+				  void *ptr, unsigned int count)	\
+{									\
+	bool is_aligned = ((unsigned long)ptr % ((t) / 8)) == 0;	\
+	u##t *buf = ptr;						\
+									\
+	if (!count)							\
+		return;							\
+									\
+	/* Some ARC CPU's don't support unaligned accesses */		\
+	if (is_aligned) {						\
+		do {							\
+			u##t x = __raw_read##f(addr);			\
+			*buf++ = x;					\
+		} while (--count);					\
+	} else {							\
+		do {							\
+			u##t x = __raw_read##f(addr);			\
+			put_unaligned(x, buf++);			\
+		} while (--count);					\
+	}								\
+}
+
+#define __raw_readsb __raw_readsb
+__raw_readsx(8, b)
+#define __raw_readsw __raw_readsw
+__raw_readsx(16, w)
+#define __raw_readsl __raw_readsl
+__raw_readsx(32, l)
+
 #define __raw_writeb __raw_writeb
 static inline void __raw_writeb(u8 b, volatile void __iomem *addr)
 {
@@ -126,6 +163,35 @@ static inline void __raw_writel(u32 w, volatile void __iomem *addr)
 
 }
 
+#define __raw_writesx(t,f)						\
+static inline void __raw_writes##f(volatile void __iomem *addr,	\
+				   const void *ptr, unsigned int count)	\
+{									\
+	bool is_aligned = ((unsigned long)ptr % ((t) / 8)) == 0;	\
+	const u##t *buf = ptr;						\
+									\
+	if (!count)							\
+		return;							\
+									\
+	/* Some ARC CPU's don't support unaligned accesses */		\
+	if (is_aligned) {						\
+		do {							\
+			__raw_write##f(*buf++, addr);			\
+		} while (--count);					\
+	} else {							\
+		do {							\
+			__raw_write##f(get_unaligned(buf++), addr);	\
+		} while (--count);					\
+	}								\
+}
+
+#define __raw_writesb __raw_writesb
+__raw_writesx(8, b)
+#define __raw_writesw __raw_writesw
+__raw_writesx(16, w)
+#define __raw_writesl __raw_writesl
+__raw_writesx(32, l)
+
 /*
  * MMIO can also get buffered/optimized in micro-arch, so barriers needed
  * Based on ARM model for the typical use case
@@ -141,10 +207,16 @@ static inline void __raw_writel(u32 w, volatile void __iomem *addr)
 #define readb(c)		({ u8  __v = readb_relaxed(c); __iormb(); __v; })
 #define readw(c)		({ u16 __v = readw_relaxed(c); __iormb(); __v; })
 #define readl(c)		({ u32 __v = readl_relaxed(c); __iormb(); __v; })
+#define readsb(p,d,l)		({ __raw_readsb(p,d,l); __iormb(); })
+#define readsw(p,d,l)		({ __raw_readsw(p,d,l); __iormb(); })
+#define readsl(p,d,l)		({ __raw_readsl(p,d,l); __iormb(); })
 
 #define writeb(v,c)		({ __iowmb(); writeb_relaxed(v,c); })
 #define writew(v,c)		({ __iowmb(); writew_relaxed(v,c); })
 #define writel(v,c)		({ __iowmb(); writel_relaxed(v,c); })
+#define writesb(p,d,l)		({ __iowmb(); __raw_writesb(p,d,l); })
+#define writesw(p,d,l)		({ __iowmb(); __raw_writesw(p,d,l); })
+#define writesl(p,d,l)		({ __iowmb(); __raw_writesl(p,d,l); })
 
 /*
  * Relaxed API for drivers which can handle barrier ordering themselves
-- 
2.19.1
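As a usage illustration (not part of the patch; the device, register
offset, and helper name below are hypothetical), a driver draining an
RX FIFO into a possibly-unaligned buffer could now simply call the new
accessor:

	#define EXAMPLE_FIFO_DATA	0x20	/* hypothetical register offset */

	static void example_drain_rx_fifo(void __iomem *base, void *dst,
					  unsigned int words)
	{
		/* Reads 'words' 32-bit values from the same FIFO data
		 * register into dst; with this patch, ARC's readsl()
		 * handles an unaligned dst via put_unaligned() instead
		 * of doing a faulting/misbehaving plain dereference. */
		readsl(base + EXAMPLE_FIFO_DATA, dst, words);
	}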