From: Tom Rix <[email protected]>
Remove the second 'the'
Replacements
completetion to completion
seens to seen
pendling to pending
atleast to at least
tranfer to transfer
multibple to a multiple
transfering to transferring
Signed-off-by: Tom Rix <[email protected]>
---
drivers/dma/ti/cppi41.c | 6 +++---
drivers/dma/ti/edma.c | 10 +++++-----
drivers/dma/ti/omap-dma.c | 2 +-
3 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/dma/ti/cppi41.c b/drivers/dma/ti/cppi41.c
index 8c2f7ebe998c..062bd9bd4de0 100644
--- a/drivers/dma/ti/cppi41.c
+++ b/drivers/dma/ti/cppi41.c
@@ -315,7 +315,7 @@ static irqreturn_t cppi41_irq(int irq, void *data)
val = cppi_readl(cdd->qmgr_mem + QMGR_PEND(i));
if (i == QMGR_PENDING_SLOT_Q(first_completion_queue) && val) {
u32 mask;
- /* set corresponding bit for completetion Q 93 */
+ /* set corresponding bit for completion Q 93 */
mask = 1 << QMGR_PENDING_BIT_Q(first_completion_queue);
/* not set all bits for queues less than Q 93 */
mask--;
@@ -703,7 +703,7 @@ static int cppi41_tear_down_chan(struct cppi41_channel *c)
* transfer descriptor followed by TD descriptor. Waiting seems not to
* cause any difference.
* RX seems to be thrown out right away. However once the TearDown
- * descriptor gets through we are done. If we have seens the transfer
+ * descriptor gets through we are done. If we have seen the transfer
* descriptor before the TD we fetch it from enqueue, it has to be
* there waiting for us.
*/
@@ -747,7 +747,7 @@ static int cppi41_stop_chan(struct dma_chan *chan)
struct cppi41_channel *cc, *_ct;
/*
- * channels might still be in the pendling list if
+ * channels might still be in the pending list if
* cppi41_dma_issue_pending() is called after
* cppi41_runtime_suspend() is called
*/
diff --git a/drivers/dma/ti/edma.c b/drivers/dma/ti/edma.c
index 08e47f44d325..3ea8ef7f57df 100644
--- a/drivers/dma/ti/edma.c
+++ b/drivers/dma/ti/edma.c
@@ -118,10 +118,10 @@
/*
* Max of 20 segments per channel to conserve PaRAM slots
- * Also note that MAX_NR_SG should be atleast the no.of periods
+ * Also note that MAX_NR_SG should be at least the no.of periods
* that are required for ASoC, otherwise DMA prep calls will
* fail. Today davinci-pcm is the only user of this driver and
- * requires atleast 17 slots, so we setup the default to 20.
+ * requires at least 17 slots, so we setup the default to 20.
*/
#define MAX_NR_SG 20
#define EDMA_MAX_SLOTS MAX_NR_SG
@@ -976,7 +976,7 @@ static int edma_config_pset(struct dma_chan *chan, struct edma_pset *epset,
* and quotient respectively of the division of:
* (dma_length / acnt) by (SZ_64K -1). This is so
* that in case bcnt over flows, we have ccnt to use.
- * Note: In A-sync tranfer only, bcntrld is used, but it
+ * Note: In A-sync transfer only, bcntrld is used, but it
* only applies for sg_dma_len(sg) >= SZ_64K.
* In this case, the best way adopted is- bccnt for the
* first frame will be the remainder below. Then for
@@ -1203,7 +1203,7 @@ static struct dma_async_tx_descriptor *edma_prep_dma_memcpy(
* slot2: the remaining amount of data after slot1.
* ACNT = full_length - length1, length2 = ACNT
*
- * When the full_length is multibple of 32767 one slot can be
+ * When the full_length is a multiple of 32767 one slot can be
* used to complete the transfer.
*/
width = array_size;
@@ -1814,7 +1814,7 @@ static void edma_issue_pending(struct dma_chan *chan)
* This limit exists to avoid a possible infinite loop when waiting for proof
* that a particular transfer is completed. This limit can be hit if there
* are large bursts to/from slow devices or the CPU is never able to catch
- * the DMA hardware idle. On an AM335x transfering 48 bytes from the UART
+ * the DMA hardware idle. On an AM335x transferring 48 bytes from the UART
* RX-FIFO, as many as 55 loops have been seen.
*/
#define EDMA_MAX_TR_WAIT_LOOPS 1000
diff --git a/drivers/dma/ti/omap-dma.c b/drivers/dma/ti/omap-dma.c
index 7cb577e6587b..8e52a0dc1f78 100644
--- a/drivers/dma/ti/omap-dma.c
+++ b/drivers/dma/ti/omap-dma.c
@@ -1442,7 +1442,7 @@ static int omap_dma_pause(struct dma_chan *chan)
* A source-synchronised channel is one where the fetching of data is
* under control of the device. In other words, a device-to-memory
* transfer. So, a destination-synchronised channel (which would be a
- * memory-to-device transfer) undergoes an abort if the the CCR_ENABLE
+ * memory-to-device transfer) undergoes an abort if the CCR_ENABLE
* bit is cleared.
* From 16.1.4.20.4.6.2 Abort: "If an abort trigger occurs, the channel
* aborts immediately after completion of current read/write
--
2.26.3
On 17/02/2022 20:25, [email protected] wrote:
> From: Tom Rix <[email protected]>
Acked-by: Peter Ujfalusi <[email protected]>
Péter
On 17-02-22, 10:25, [email protected] wrote:
> From: Tom Rix <[email protected]>
Applied, thanks
--
~Vinod