From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Eugeniy Paltsev, Vineet Gupta
Subject: [PATCH 4.19 036/103] ARCv2: lib: memeset: fix doing prefetchw outside of buffer
Date: Tue, 29 Jan 2019 12:35:13 +0100
Message-Id: <20190129113201.707328936@linuxfoundation.org>
In-Reply-To: <20190129113159.567154026@linuxfoundation.org>
References: <20190129113159.567154026@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review

4.19-stable review patch. If anyone has any objections, please let me know.

------------------

From: Eugeniy Paltsev

commit e6a72b7daeeb521753803550f0ed711152bb2555 upstream.

The ARCv2 optimized memset uses the PREFETCHW instruction to prefetch the
next cache line, but doesn't ensure that the line is not past the end of
the buffer. PREFETCHW changes the line ownership and marks it dirty, which
can cause issues in SMP configurations when the next line was already owned
by another core. Fix the issue by avoiding the PREFETCHW.

Some more details:

The current code has 3 logical loops (ignoring the unaligned part):
 (a) Big loop doing aligned 64 bytes per iteration with PREALLOC
 (b) Loop for 32 x 2 bytes with PREFETCHW
 (c) Loop for any leftover bytes

Loop (a) was already eliding the last 64 bytes, so PREALLOC was safe there.
The fix removes PREFETCHW from (b).

Another potential issue (applicable to configs with 32- or 128-byte L1
cache lines) is that PREALLOC assumes a 64-byte cache line and may not do
the right thing, especially for 32B. While it would be easy to adapt, there
are no known configs with those line sizes, so for now just compile out
PREALLOC in such cases.

Signed-off-by: Eugeniy Paltsev
Cc: stable@vger.kernel.org #4.4+
Signed-off-by: Vineet Gupta
[vgupta: rewrote changelog, used asm .macro vs.
"C" macro]
Signed-off-by: Greg Kroah-Hartman

---
 arch/arc/lib/memset-archs.S |   40 ++++++++++++++++++++++++++++++++--------
 1 file changed, 32 insertions(+), 8 deletions(-)

--- a/arch/arc/lib/memset-archs.S
+++ b/arch/arc/lib/memset-archs.S
@@ -7,11 +7,39 @@
  */
 
 #include <linux/linkage.h>
+#include <asm/cache.h>
 
-#undef PREALLOC_NOT_AVAIL
+/*
+ * The memset implementation below is optimized to use prefetchw and prealloc
+ * instruction in case of CPU with 64B L1 data cache line (L1_CACHE_SHIFT == 6)
+ * If you want to implement optimized memset for other possible L1 data cache
+ * line lengths (32B and 128B) you should rewrite code carefully checking
+ * we don't call any prefetchw/prealloc instruction for L1 cache lines which
+ * don't belongs to memset area.
+ */
+
+#if L1_CACHE_SHIFT == 6
+
+.macro PREALLOC_INSTR	reg, off
+	prealloc	[\reg, \off]
+.endm
+
+.macro PREFETCHW_INSTR	reg, off
+	prefetchw	[\reg, \off]
+.endm
+
+#else
+
+.macro PREALLOC_INSTR
+.endm
+
+.macro PREFETCHW_INSTR
+.endm
+
+#endif
 
 ENTRY_CFI(memset)
-	prefetchw [r0]		; Prefetch the write location
+	PREFETCHW_INSTR	r0, 0	; Prefetch the first write location
 	mov.f	0, r2
 ;;; if size is zero
 	jz.d	[blink]
@@ -48,11 +76,8 @@ ENTRY_CFI(memset)
 
 	lpnz	@.Lset64bytes
 	;; LOOP START
-#ifdef PREALLOC_NOT_AVAIL
-	prefetchw [r3, 64]	;Prefetch the next write location
-#else
-	prealloc  [r3, 64]
-#endif
+	PREALLOC_INSTR	r3, 64	; alloc next line w/o fetching
+
 #ifdef CONFIG_ARC_HAS_LL64
 	std.ab	r4, [r3, 8]
 	std.ab	r4, [r3, 8]
@@ -85,7 +110,6 @@ ENTRY_CFI(memset)
 	lsr.f	lp_count, r2, 5	;Last remaining max 124 bytes
 	lpnz	.Lset32bytes
 	;; LOOP START
-	prefetchw [r3, 32]	;Prefetch the next write location
 #ifdef CONFIG_ARC_HAS_LL64
 	std.ab	r4, [r3, 8]
 	std.ab	r4, [r3, 8]