From: Dave Stevenson
Date: Tue, 12 Mar 2024 17:05:07 +0000
Subject: Re: [PATCH v2 12/15] dmaengine: bcm2835: introduce multi platform support
To: Andrea della Porta
Cc: Vinod Koul, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
    Florian Fainelli, Ray Jui, Scott Branden,
    Broadcom internal kernel review list, Saenz Julienne,
    dmaengine@vger.kernel.org, devicetree@vger.kernel.org,
    linux-rpi-kernel@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, Stefan Wahren
In-Reply-To: <5826eba6ab78b9cdba21c12853a85d5f9a6aab76.1710226514.git.andrea.porta@suse.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Andrea

On Tue, 12 Mar 2024 at 09:13, Andrea della Porta wrote:
>
> This finally moves all platform-specific stuff into a separate structure,
> which is initialized based on the OF compatible during probing. Since the
> DMA control block is different on the BCM2711 platform, we introduce a
> common control block to reserve the necessary space, along with suitable
> accessor methods.
> > Signed-off-by: Stefan Wahren > Signed-off-by: Andrea della Porta > --- > drivers/dma/bcm2835-dma.c | 336 +++++++++++++++++++++++++++++--------- > 1 file changed, 260 insertions(+), 76 deletions(-) > > diff --git a/drivers/dma/bcm2835-dma.c b/drivers/dma/bcm2835-dma.c > index 88ae5d05402e..b015eae29b08 100644 > --- a/drivers/dma/bcm2835-dma.c > +++ b/drivers/dma/bcm2835-dma.c > @@ -48,6 +48,11 @@ struct bcm2835_dmadev { > struct dma_device ddev; > void __iomem *base; > dma_addr_t zero_page; > + const struct bcm2835_dma_cfg *cfg; > +}; > + > +struct bcm_dma_cb { > + uint32_t rsvd[8]; > }; > > struct bcm2835_dma_cb { > @@ -61,7 +66,7 @@ struct bcm2835_dma_cb { > }; > > struct bcm2835_cb_entry { > - struct bcm2835_dma_cb *cb; > + struct bcm_dma_cb *cb; > dma_addr_t paddr; > }; > > @@ -82,6 +87,44 @@ struct bcm2835_chan { > bool is_lite_channel; > }; > > +struct bcm2835_dma_cfg { > + dma_addr_t addr_offset; > + u32 cs_reg; > + u32 cb_reg; > + u32 next_reg; > + u32 ti_reg; > + > + u32 wait_mask; > + u32 reset_mask; > + u32 int_mask; > + u32 active_mask; > + u32 abort_mask; > + u32 s_dreq_mask; > + u32 d_dreq_mask; > + > + u32 (*cb_get_length)(void *data); > + dma_addr_t (*cb_get_addr)(void *data, enum dma_transfer_direction); > + > + void (*cb_init)(void *data, struct bcm2835_chan *c, > + enum dma_transfer_direction, u32 src, u32 dst, > + bool zero_page); > + void (*cb_set_src)(void *data, enum dma_transfer_direction, u32 src); > + void (*cb_set_dst)(void *data, enum dma_transfer_direction, u32 dst); > + void (*cb_set_next)(void *data, u32 next); > + void (*cb_set_length)(void *data, u32 length); > + void (*cb_append_extra)(void *data, > + struct bcm2835_chan *c, > + enum dma_transfer_direction direction, > + bool cyclic, bool final, unsigned long flags); > + > + dma_addr_t (*to_cb_addr)(dma_addr_t addr); > + > + void (*chan_plat_init)(struct bcm2835_chan *c); > + dma_addr_t (*read_addr)(struct bcm2835_chan *c, > + enum dma_transfer_direction); > + u32 (*cs_flags)(struct bcm2835_chan *c); > +}; > + > struct bcm2835_desc { > struct bcm2835_chan *c; > struct virt_dma_desc vd; > @@ -215,6 +258,13 @@ static inline struct bcm2835_dmadev *to_bcm2835_dma_dev(struct dma_device *d) > return container_of(d, struct bcm2835_dmadev, ddev); > } > > +static inline const struct bcm2835_dma_cfg *to_bcm2835_cfg(struct dma_device *d) > +{ > + struct bcm2835_dmadev *od = container_of(d, struct bcm2835_dmadev, ddev); > + > + return od->cfg; > +} > + > static inline struct bcm2835_chan *to_bcm2835_dma_chan(struct dma_chan *c) > { > return container_of(c, struct bcm2835_chan, vc.chan); > @@ -292,6 +342,109 @@ static inline bool need_dst_incr(enum dma_transfer_direction direction) > return false; > } > > +static inline u32 bcm2835_dma_cb_get_length(void *data) > +{ > + struct bcm2835_dma_cb *cb = data; > + > + return cb->length; > +} > + > +static inline dma_addr_t > +bcm2835_dma_cb_get_addr(void *data, enum dma_transfer_direction direction) > +{ > + struct bcm2835_dma_cb *cb = data; > + > + if (direction == DMA_DEV_TO_MEM) > + return cb->dst; > + > + return cb->src; > +} > + > +static inline void > +bcm2835_dma_cb_init(void *data, struct bcm2835_chan *c, > + enum dma_transfer_direction direction, u32 src, u32 dst, > + bool zero_page) > +{ > + struct bcm2835_dma_cb *cb = data; > + > + cb->info = bcm2835_dma_prepare_cb_info(c, direction, zero_page); > + cb->src = src; > + cb->dst = dst; > + cb->stride = 0; > + cb->next = 0; > +} > + > +static inline void > +bcm2835_dma_cb_set_src(void *data, enum 
dma_transfer_direction direction, > + u32 src) > +{ > + struct bcm2835_dma_cb *cb = data; > + > + cb->src = src; > +} > + > +static inline void > +bcm2835_dma_cb_set_dst(void *data, enum dma_transfer_direction direction, > + u32 dst) > +{ > + struct bcm2835_dma_cb *cb = data; > + > + cb->dst = dst; > +} > + > +static inline void bcm2835_dma_cb_set_next(void *data, u32 next) > +{ > + struct bcm2835_dma_cb *cb = data; > + > + cb->next = next; > +} > + > +static inline void bcm2835_dma_cb_set_length(void *data, u32 length) > +{ > + struct bcm2835_dma_cb *cb = data; > + > + cb->length = length; > +} > + > +static inline void > +bcm2835_dma_cb_append_extra(void *data, struct bcm2835_chan *c, > + enum dma_transfer_direction direction, > + bool cyclic, bool final, unsigned long flags) > +{ > + struct bcm2835_dma_cb *cb = data; > + > + cb->info |= bcm2835_dma_prepare_cb_extra(c, direction, cyclic, final, > + flags); > +} > + > +static inline dma_addr_t bcm2835_dma_to_cb_addr(dma_addr_t addr) > +{ > + return addr; > +} > + > +static void bcm2835_dma_chan_plat_init(struct bcm2835_chan *c) > +{ > + /* check in DEBUG register if this is a LITE channel */ > + if (readl(c->chan_base + BCM2835_DMA_DEBUG) & BCM2835_DMA_DEBUG_LITE) > + c->is_lite_channel = true; > +} > + > +static dma_addr_t bcm2835_dma_read_addr(struct bcm2835_chan *c, > + enum dma_transfer_direction direction) > +{ > + if (direction == DMA_MEM_TO_DEV) > + return readl(c->chan_base + BCM2835_DMA_SOURCE_AD); > + else if (direction == DMA_DEV_TO_MEM) > + return readl(c->chan_base + BCM2835_DMA_DEST_AD); > + > + return 0; > +} > + > +static u32 bcm2835_dma_cs_flags(struct bcm2835_chan *c) > +{ > + return BCM2835_DMA_CS_FLAGS(c->dreq); > +} > + > static void bcm2835_dma_free_cb_chain(struct bcm2835_desc *desc) > { > size_t i; > @@ -309,16 +462,19 @@ static void bcm2835_dma_desc_free(struct virt_dma_desc *vd) > } > > static bool bcm2835_dma_create_cb_set_length(struct dma_chan *chan, > - struct bcm2835_dma_cb *control_block, > + void *data, > size_t len, > size_t period_len, > size_t *total_len) > { > + const struct bcm2835_dma_cfg *cfg = to_bcm2835_cfg(chan->device); > struct bcm2835_chan *c = to_bcm2835_dma_chan(chan); > size_t max_len = bcm2835_dma_max_frame_length(c); > > /* set the length taking lite-channel limitations into account */ > - control_block->length = min_t(u32, len, max_len); > + u32 length = min_t(u32, len, max_len); > + > + cfg->cb_set_length(data, length); > > /* finished if we have no period_length */ > if (!period_len) > @@ -333,14 +489,14 @@ static bool bcm2835_dma_create_cb_set_length(struct dma_chan *chan, > */ > > /* have we filled in period_length yet? 
> */
> - if (*total_len + control_block->length < period_len) {
> /* update number of bytes in this period so far */
> - *total_len += control_block->length;
> + *total_len += length;
> return false;
> }
>
> /* calculate the length that remains to reach period_length */
> - control_block->length = period_len - *total_len;
> + cfg->cb_set_length(data, period_len - *total_len);
>
> /* reset total_length for next period */
> *total_len = 0;
> @@ -388,15 +544,14 @@ static struct bcm2835_desc *bcm2835_dma_create_cb_chain(
> size_t buf_len, size_t period_len,
> gfp_t gfp, unsigned long flags)
> {
> + const struct bcm2835_dma_cfg *cfg = to_bcm2835_cfg(chan->device);
> struct bcm2835_dmadev *od = to_bcm2835_dma_dev(chan->device);
> struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
> size_t len = buf_len, total_len;
> size_t frame;
> struct bcm2835_desc *d;
> struct bcm2835_cb_entry *cb_entry;
> - struct bcm2835_dma_cb *control_block;
> - u32 extrainfo = bcm2835_dma_prepare_cb_extra(c, direction, cyclic,
> - false, flags);
> + struct bcm_dma_cb *control_block;
> bool zero_page = false;
>
> if (!frames)
> @@ -432,12 +587,7 @@ static struct bcm2835_desc *bcm2835_dma_create_cb_chain(
>
> /* fill in the control block */
> control_block = cb_entry->cb;
> - control_block->info = bcm2835_dma_prepare_cb_info(c, direction,
> - zero_page);
> - control_block->src = src;
> - control_block->dst = dst;
> - control_block->stride = 0;
> - control_block->next = 0;
> + cfg->cb_init(control_block, c, src, dst, direction, zero_page);

Can I ask how you've been testing these patches?

This line was one of the bugs that I found during my work.

The prototype for cb_init is
+ void (*cb_init)(void *data, struct bcm2835_chan *c,
+ enum dma_transfer_direction, u32 src, u32 dst,
+ bool zero_page);

So this call has direction in the wrong place, which leads to quite
comical failures.
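For reference, a call matching that prototype would presumably look like the
following (untested sketch, reusing the local variable names already present
in bcm2835_dma_create_cb_chain()):

  /* untested sketch: direction moved ahead of src/dst to match the
   * cb_init prototype declared above
   */
  cfg->cb_init(control_block, c, direction, src, dst, zero_page);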
Thanks
  Dave

> /* set up length in control_block if requested */
> if (buf_len) {
> /* calculate length honoring period_length */
> @@ -445,31 +595,33 @@ static struct bcm2835_desc *bcm2835_dma_create_cb_chain(
> chan, control_block,
> len, period_len, &total_len)) {
> /* add extrainfo bits in info */
> - control_block->info |= extrainfo;
> + bcm2835_dma_cb_append_extra(control_block, c,
> + direction, cyclic,
> + false, flags);
> }
>
> /* calculate new remaining length */
> - len -= control_block->length;
> + len -= cfg->cb_get_length(control_block);
> }
>
> /* link this the last controlblock */
> if (frame)
> - d->cb_list[frame - 1].cb->next = cb_entry->paddr;
> + cfg->cb_set_next(d->cb_list[frame - 1].cb,
> + cb_entry->paddr);
>
> /* update src and dst and length */
> if (src && need_src_incr(direction))
> - src += control_block->length;
> + src += cfg->cb_get_length(control_block);
> if (dst && need_dst_incr(direction))
> - dst += control_block->length;
> + dst += cfg->cb_get_length(control_block);
>
> /* Length of total transfer */
> - d->size += control_block->length;
> + d->size += cfg->cb_get_length(control_block);
> }
>
> /* the last frame requires extra flags */
> - extrainfo = bcm2835_dma_prepare_cb_extra(c, direction, cyclic, true,
> - flags);
> - d->cb_list[d->frames - 1].cb->info |= extrainfo;
> + cfg->cb_append_extra(d->cb_list[d->frames - 1].cb, c, direction, cyclic,
> + true, flags);
>
> /* detect a size mismatch */
> if (buf_len && d->size != buf_len)
> @@ -489,6 +641,7 @@ static void bcm2835_dma_fill_cb_chain_with_sg(
> struct scatterlist *sgl,
> unsigned int sg_len)
> {
> + const struct bcm2835_dma_cfg *cfg = to_bcm2835_cfg(chan->device);
> struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
> size_t len, max_len;
> unsigned int i;
> @@ -499,18 +652,19 @@ static void bcm2835_dma_fill_cb_chain_with_sg(
> for_each_sg(sgl, sgent, sg_len, i) {
> for (addr = sg_dma_address(sgent), len = sg_dma_len(sgent);
> len > 0;
> - addr += cb->cb->length, len -= cb->cb->length, cb++) {
> + addr += cfg->cb_get_length(cb->cb), len -= cfg->cb_get_length(cb->cb), cb++) {
> if (direction == DMA_DEV_TO_MEM)
> - cb->cb->dst = addr;
> + cfg->cb_set_dst(cb->cb, direction, addr);
> else
> - cb->cb->src = addr;
> - cb->cb->length = min(len, max_len);
> + cfg->cb_set_src(cb->cb, direction, addr);
> + cfg->cb_set_length(cb->cb, min(len, max_len));
> }
> }
> }
>
> static void bcm2835_dma_abort(struct dma_chan *chan)
> {
> + const struct bcm2835_dma_cfg *cfg = to_bcm2835_cfg(chan->device);
> struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
> void __iomem *chan_base = c->chan_base;
> long timeout = 100;
> @@ -519,41 +673,42 @@ static void bcm2835_dma_abort(struct dma_chan *chan)
> * A zero control block address means the channel is idle.
> * (The ACTIVE flag in the CS register is not a reliable indicator.)
> */ > - if (!readl(chan_base + BCM2835_DMA_ADDR)) > + if (!readl(chan_base + cfg->cb_reg)) > return; > > /* We need to clear the next DMA block pending */ > - writel(0, chan_base + BCM2835_DMA_NEXTCB); > + writel(0, chan_base + cfg->next_reg); > > /* Abort the DMA, which needs to be enabled to complete */ > - writel(readl(chan_base + BCM2835_DMA_CS) | BCM2835_DMA_ABORT | BCM2835_DMA_ACTIVE, > - chan_base + BCM2835_DMA_CS); > + writel(readl(chan_base + cfg->cs_reg) | cfg->abort_mask | cfg->active_mask, > + chan_base + cfg->cs_reg); > > /* wait for DMA to be aborted */ > - while ((readl(chan_base + BCM2835_DMA_CS) & BCM2835_DMA_ABORT) && --timeout) > + while ((readl(chan_base + cfg->cs_reg) & cfg->abort_mask) && --timeout) > cpu_relax(); > > /* Write 0 to the active bit - Pause the DMA */ > - writel(readl(chan_base + BCM2835_DMA_CS) & ~BCM2835_DMA_ACTIVE, > - chan_base + BCM2835_DMA_CS); > + writel(readl(chan_base + cfg->cs_reg) & ~cfg->active_mask, > + chan_base + cfg->cs_reg); > > /* > * Peripheral might be stuck and fail to complete > * This is expected when dreqs are enabled but not asserted > * so only report error in non dreq case > */ > - if (!timeout && !(readl(chan_base + BCM2835_DMA_TI) & > - (BCM2835_DMA_S_DREQ | BCM2835_DMA_D_DREQ))) > + if (!timeout && !(readl(chan_base + cfg->ti_reg) & > + (cfg->s_dreq_mask | cfg->d_dreq_mask))) > dev_err(c->vc.chan.device->dev, > "failed to complete pause on dma %d (CS:%08x)\n", c->ch, > - readl(chan_base + BCM2835_DMA_CS)); > + readl(chan_base + cfg->cs_reg)); > > /* Set CS back to default state and reset the DMA */ > - writel(BCM2835_DMA_RESET, chan_base + BCM2835_DMA_CS); > + writel(cfg->reset_mask, chan_base + cfg->cs_reg); > } > > static void bcm2835_dma_start_desc(struct dma_chan *chan) > { > + const struct bcm2835_dma_cfg *cfg = to_bcm2835_cfg(chan->device); > struct bcm2835_chan *c = to_bcm2835_dma_chan(chan); > struct virt_dma_desc *vd = vchan_next_desc(&c->vc); > > @@ -566,14 +721,15 @@ static void bcm2835_dma_start_desc(struct dma_chan *chan) > > c->desc = to_bcm2835_dma_desc(&vd->tx); > > - writel(c->desc->cb_list[0].paddr, c->chan_base + BCM2835_DMA_ADDR); > - writel(BCM2835_DMA_ACTIVE | BCM2835_DMA_CS_FLAGS(c->dreq), > - c->chan_base + BCM2835_DMA_CS); > + writel(cfg->to_cb_addr(c->desc->cb_list[0].paddr), c->chan_base + cfg->cb_reg); > + writel(cfg->active_mask | cfg->cs_flags(c), > + c->chan_base + cfg->cs_reg); > } > > static irqreturn_t bcm2835_dma_callback(int irq, void *data) > { > struct dma_chan *chan = data; > + const struct bcm2835_dma_cfg *cfg = to_bcm2835_cfg(chan->device); > struct bcm2835_chan *c = to_bcm2835_dma_chan(chan); > struct bcm2835_desc *d; > unsigned long flags; > @@ -581,9 +737,9 @@ static irqreturn_t bcm2835_dma_callback(int irq, void *data) > /* check the shared interrupt */ > if (c->irq_flags & IRQF_SHARED) { > /* check if the interrupt is enabled */ > - flags = readl(c->chan_base + BCM2835_DMA_CS); > + flags = readl(c->chan_base + cfg->cs_reg); > /* if not set then we are not the reason for the irq */ > - if (!(flags & BCM2835_DMA_INT)) > + if (!(flags & cfg->int_mask)) > return IRQ_NONE; > } > > @@ -596,9 +752,7 @@ static irqreturn_t bcm2835_dma_callback(int irq, void *data) > * if this IRQ handler is threaded.) If the channel is finished, it > * will remain idle despite the ACTIVE flag being set. 
> */ > - writel(BCM2835_DMA_INT | BCM2835_DMA_ACTIVE | > - BCM2835_DMA_CS_FLAGS(c->dreq), > - c->chan_base + BCM2835_DMA_CS); > + writel(cfg->int_mask | cfg->active_mask | cfg->cs_flags(c), c->chan_base + cfg->cs_reg); > > d = c->desc; > > @@ -606,7 +760,7 @@ static irqreturn_t bcm2835_dma_callback(int irq, void *data) > if (d->cyclic) { > /* call the cyclic callback */ > vchan_cyclic_callback(&d->vd); > - } else if (!readl(c->chan_base + BCM2835_DMA_ADDR)) { > + } else if (!readl(c->chan_base + cfg->cb_reg)) { > vchan_cookie_complete(&c->desc->vd); > bcm2835_dma_start_desc(chan); > } > @@ -629,7 +783,7 @@ static int bcm2835_dma_alloc_chan_resources(struct dma_chan *chan) > * (32 byte) aligned address (BCM2835 ARM Peripherals, sec. 4.2.1.1). > */ > c->cb_pool = dma_pool_create(dev_name(dev), dev, > - sizeof(struct bcm2835_dma_cb), 32, 0); > + sizeof(struct bcm_dma_cb), 32, 0); > if (!c->cb_pool) { > dev_err(dev, "unable to allocate descriptor pool\n"); > return -ENOMEM; > @@ -655,20 +809,16 @@ static size_t bcm2835_dma_desc_size(struct bcm2835_desc *d) > return d->size; > } > > -static size_t bcm2835_dma_desc_size_pos(struct bcm2835_desc *d, dma_addr_t addr) > +static size_t bcm2835_dma_desc_size_pos(const struct bcm2835_dma_cfg *cfg, > + struct bcm2835_desc *d, dma_addr_t addr) > { > unsigned int i; > size_t size; > > for (size = i = 0; i < d->frames; i++) { > - struct bcm2835_dma_cb *control_block = d->cb_list[i].cb; > - size_t this_size = control_block->length; > - dma_addr_t dma; > - > - if (d->dir == DMA_DEV_TO_MEM) > - dma = control_block->dst; > - else > - dma = control_block->src; > + struct bcm_dma_cb *control_block = d->cb_list[i].cb; > + size_t this_size = cfg->cb_get_length(control_block); > + dma_addr_t dma = cfg->cb_get_addr(control_block, d->dir); > > if (size) > size += this_size; > @@ -683,6 +833,7 @@ static enum dma_status bcm2835_dma_tx_status(struct dma_chan *chan, > dma_cookie_t cookie, > struct dma_tx_state *txstate) > { > + const struct bcm2835_dma_cfg *cfg = to_bcm2835_cfg(chan->device); > struct bcm2835_chan *c = to_bcm2835_dma_chan(chan); > struct virt_dma_desc *vd; > enum dma_status ret; > @@ -701,14 +852,8 @@ static enum dma_status bcm2835_dma_tx_status(struct dma_chan *chan, > struct bcm2835_desc *d = c->desc; > dma_addr_t pos; > > - if (d->dir == DMA_MEM_TO_DEV) > - pos = readl(c->chan_base + BCM2835_DMA_SOURCE_AD); > - else if (d->dir == DMA_DEV_TO_MEM) > - pos = readl(c->chan_base + BCM2835_DMA_DEST_AD); > - else > - pos = 0; > - > - txstate->residue = bcm2835_dma_desc_size_pos(d, pos); > + pos = cfg->read_addr(c, d->dir); > + txstate->residue = bcm2835_dma_desc_size_pos(cfg, d, pos); > } else { > txstate->residue = 0; > } > @@ -761,6 +906,7 @@ static struct dma_async_tx_descriptor *bcm2835_dma_prep_slave_sg( > enum dma_transfer_direction direction, > unsigned long flags, void *context) > { > + const struct bcm2835_dma_cfg *cfg = to_bcm2835_cfg(chan->device); > struct bcm2835_chan *c = to_bcm2835_dma_chan(chan); > struct bcm2835_desc *d; > dma_addr_t src = 0, dst = 0; > @@ -775,11 +921,11 @@ static struct dma_async_tx_descriptor *bcm2835_dma_prep_slave_sg( > if (direction == DMA_DEV_TO_MEM) { > if (c->cfg.src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) > return NULL; > - src = c->cfg.src_addr; > + src = cfg->addr_offset + c->cfg.src_addr; > } else { > if (c->cfg.dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) > return NULL; > - dst = c->cfg.dst_addr; > + dst = cfg->addr_offset + c->cfg.dst_addr; > } > > /* count frames in sg list */ > @@ -803,6 +949,7 @@ static 
struct dma_async_tx_descriptor *bcm2835_dma_prep_dma_cyclic( > size_t period_len, enum dma_transfer_direction direction, > unsigned long flags) > { > + const struct bcm2835_dma_cfg *cfg = to_bcm2835_cfg(chan->device); > struct bcm2835_chan *c = to_bcm2835_dma_chan(chan); > struct bcm2835_desc *d; > dma_addr_t src, dst; > @@ -836,12 +983,12 @@ static struct dma_async_tx_descriptor *bcm2835_dma_prep_dma_cyclic( > if (direction == DMA_DEV_TO_MEM) { > if (c->cfg.src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) > return NULL; > - src = c->cfg.src_addr; > + src = cfg->addr_offset + c->cfg.src_addr; > dst = buf_addr; > } else { > if (c->cfg.dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) > return NULL; > - dst = c->cfg.dst_addr; > + dst = cfg->addr_offset + c->cfg.dst_addr; > src = buf_addr; > } > > @@ -862,7 +1009,8 @@ static struct dma_async_tx_descriptor *bcm2835_dma_prep_dma_cyclic( > return NULL; > > /* wrap around into a loop */ > - d->cb_list[d->frames - 1].cb->next = d->cb_list[0].paddr; > + cfg->cb_set_next(d->cb_list[d->frames - 1].cb, > + cfg->to_cb_addr(d->cb_list[0].paddr)); > > return vchan_tx_prep(&c->vc, &d->vd, flags); > } > @@ -923,10 +1071,7 @@ static int bcm2835_dma_chan_init(struct bcm2835_dmadev *d, int chan_id, > c->irq_number = irq; > c->irq_flags = irq_flags; > > - /* check in DEBUG register if this is a LITE channel */ > - if (readl(c->chan_base + BCM2835_DMA_DEBUG) & > - BCM2835_DMA_DEBUG_LITE) > - c->is_lite_channel = true; > + d->cfg->chan_plat_init(c); > > return 0; > } > @@ -945,8 +1090,40 @@ static void bcm2835_dma_free(struct bcm2835_dmadev *od) > DMA_TO_DEVICE, DMA_ATTR_SKIP_CPU_SYNC); > } > > +static const struct bcm2835_dma_cfg bcm2835_data = { > + .addr_offset = 0, > + > + .cs_reg = BCM2835_DMA_CS, > + .cb_reg = BCM2835_DMA_ADDR, > + .next_reg = BCM2835_DMA_NEXTCB, > + .ti_reg = BCM2835_DMA_TI, > + > + .wait_mask = BCM2835_DMA_WAITING_FOR_WRITES, > + .reset_mask = BCM2835_DMA_RESET, > + .int_mask = BCM2835_DMA_INT, > + .active_mask = BCM2835_DMA_ACTIVE, > + .abort_mask = BCM2835_DMA_ABORT, > + .s_dreq_mask = BCM2835_DMA_S_DREQ, > + .d_dreq_mask = BCM2835_DMA_D_DREQ, > + > + .cb_get_length = bcm2835_dma_cb_get_length, > + .cb_get_addr = bcm2835_dma_cb_get_addr, > + .cb_init = bcm2835_dma_cb_init, > + .cb_set_src = bcm2835_dma_cb_set_src, > + .cb_set_dst = bcm2835_dma_cb_set_dst, > + .cb_set_next = bcm2835_dma_cb_set_next, > + .cb_set_length = bcm2835_dma_cb_set_length, > + .cb_append_extra = bcm2835_dma_cb_append_extra, > + > + .to_cb_addr = bcm2835_dma_to_cb_addr, > + > + .chan_plat_init = bcm2835_dma_chan_plat_init, > + .read_addr = bcm2835_dma_read_addr, > + .cs_flags = bcm2835_dma_cs_flags, > +}; > + > static const struct of_device_id bcm2835_dma_of_match[] = { > - { .compatible = "brcm,bcm2835-dma", }, > + { .compatible = "brcm,bcm2835-dma", .data = &bcm2835_data }, > {}, > }; > MODULE_DEVICE_TABLE(of, bcm2835_dma_of_match); > @@ -978,6 +1155,12 @@ static int bcm2835_dma_probe(struct platform_device *pdev) > u32 chans_available; > char chan_name[BCM2835_DMA_CHAN_NAME_SIZE]; > > + const void *cfg_data = device_get_match_data(&pdev->dev); > + if (!cfg_data) { > + dev_err(&pdev->dev, "Failed to match compatible string\n"); > + return -EINVAL; > + } > + > if (!pdev->dev.dma_mask) > pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask; > > @@ -998,6 +1181,7 @@ static int bcm2835_dma_probe(struct platform_device *pdev) > return PTR_ERR(base); > > od->base = base; > + od->cfg = cfg_data; > > dma_cap_set(DMA_SLAVE, od->ddev.cap_mask); > dma_cap_set(DMA_PRIVATE, 
od->ddev.cap_mask);
> --
> 2.35.3
>
>