Date: Sun, 29 Jul 2018 12:35:10 -0700 (PDT)
Message-Id: <20180729.123510.1847228041867717113.davem@davemloft.net>
To: Eugeniy.Paltsev@synopsys.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, Jose.Abreu@synopsys.com, alexandre.torgue@st.com, peppe.cavallaro@st.com
Subject: Re: [PATCH] NET: stmmac: align DMA stuff to largest cache line length
From: David Miller
In-Reply-To: <20180726120537.4664-1-Eugeniy.Paltsev@synopsys.com>
References: <20180726120537.4664-1-Eugeniy.Paltsev@synopsys.com>
From: Eugeniy Paltsev
Date: Thu, 26 Jul 2018 15:05:37 +0300

> As of today, the STMMAC_ALIGN macro (which is used to align DMA
> buffers and descriptors) relies on the L1 cache line length
> (L1_CACHE_BYTES). This is incorrect on systems with several cache
> levels whose L1 cache line is smaller than the L2 line: a DMA buffer
> can then share a cache line with other data, and that data can be
> lost when we invalidate the DMA buffer before a DMA transaction.
>
> Fix this by aligning to SMP_CACHE_BYTES instead of L1_CACHE_BYTES.
>
> Signed-off-by: Eugeniy Paltsev

This is definitely an improvement, so applied and queued up for -stable.

There is also dma_get_cache_alignment(), so maybe we can eventually use
that here instead.