Currently, set_sysclk() is used to override the output bit clock rate.
But this is not a common way to implement set_dai_sysclk(), and it
creates a problem when a generic machine driver (simple-card for
example) calls set_dai_sysclk() to pass an input clock rate for the
baud clock, instead of setting the bit clock rate as the fsl_ssi
driver expects.

So this patch solves the problem by first removing set_sysclk(),
since hw_params() can calculate the bit clock rate. Second, in order
not to break those TDM use cases which previously might have been
using set_sysclk() to override the bit clock rate, this patch changes
the driver to override it based on the slot number instead.

The patch also removes an obsolete comment about the dir parameter.
Signed-off-by: Nicolin Chen <[email protected]>
---
sound/soc/fsl/fsl_ssi.c | 26 ++++++++------------------
1 file changed, 8 insertions(+), 18 deletions(-)
diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
index 64598d1..3657c88 100644
--- a/sound/soc/fsl/fsl_ssi.c
+++ b/sound/soc/fsl/fsl_ssi.c
@@ -197,12 +197,12 @@ struct fsl_ssi_soc_data {
* @use_dma: DMA is used or FIQ with stream filter
* @use_dual_fifo: DMA with support for both FIFOs used
* @fifo_deph: Depth of the SSI FIFOs
+ * @slots: number of slots
* @rxtx_reg_val: Specific register settings for receive/transmit configuration
*
* @clk: SSI clock
* @baudclk: SSI baud clock for master mode
* @baudclk_streams: Active streams that are using baudclk
- * @bitclk_freq: bitclock frequency set by .set_dai_sysclk
*
* @dma_params_tx: DMA transmit parameters
* @dma_params_rx: DMA receive parameters
@@ -233,12 +233,12 @@ struct fsl_ssi_private {
bool use_dual_fifo;
bool has_ipg_clk_name;
unsigned int fifo_depth;
+ unsigned int slots;
struct fsl_ssi_rxtx_reg_val rxtx_reg_val;
struct clk *clk;
struct clk *baudclk;
unsigned int baudclk_streams;
- unsigned int bitclk_freq;
/* regcache for volatile regs */
u32 regcache_sfcsr;
@@ -700,8 +700,7 @@ static void fsl_ssi_shutdown(struct snd_pcm_substream *substream,
* Note: This function can be only called when using SSI as DAI master
*
* Quick instruction for parameters:
- * freq: Output BCLK frequency = samplerate * 32 (fixed) * channels
- * dir: SND_SOC_CLOCK_OUT -> TxBCLK, SND_SOC_CLOCK_IN -> RxBCLK.
+ * freq: Output BCLK frequency = samplerate * 32 (fixed) * slots (or channels)
*/
static int fsl_ssi_set_bclk(struct snd_pcm_substream *substream,
struct snd_soc_dai *cpu_dai,
@@ -716,9 +715,9 @@ static int fsl_ssi_set_bclk(struct snd_pcm_substream *substream,
unsigned int freq;
bool baudclk_is_used;
- /* Prefer the explicitly set bitclock frequency */
- if (ssi_private->bitclk_freq)
- freq = ssi_private->bitclk_freq;
+ /* Generate bit clock based on the slot or channel number */
+ if (ssi_private->slots)
+ freq = ssi_private->slots * 32 * params_rate(hw_params);
else
freq = params_channels(hw_params) * 32 * params_rate(hw_params);
@@ -805,16 +804,6 @@ static int fsl_ssi_set_bclk(struct snd_pcm_substream *substream,
return 0;
}
-static int fsl_ssi_set_dai_sysclk(struct snd_soc_dai *cpu_dai,
- int clk_id, unsigned int freq, int dir)
-{
- struct fsl_ssi_private *ssi_private = snd_soc_dai_get_drvdata(cpu_dai);
-
- ssi_private->bitclk_freq = freq;
-
- return 0;
-}
-
/**
* fsl_ssi_hw_params - program the sample size
*
@@ -1121,6 +1110,8 @@ static int fsl_ssi_set_dai_tdm_slot(struct snd_soc_dai *cpu_dai, u32 tx_mask,
regmap_update_bits(regs, CCSR_SSI_SCR, CCSR_SSI_SCR_SSIEN, val);
+ ssi_private->slots = slots;
+
return 0;
}
@@ -1191,7 +1182,6 @@ static const struct snd_soc_dai_ops fsl_ssi_dai_ops = {
.hw_params = fsl_ssi_hw_params,
.hw_free = fsl_ssi_hw_free,
.set_fmt = fsl_ssi_set_dai_fmt,
- .set_sysclk = fsl_ssi_set_dai_sysclk,
.set_tdm_slot = fsl_ssi_set_dai_tdm_slot,
.trigger = fsl_ssi_trigger,
};
--
2.7.4
On Thu, Sep 07, 2017 at 10:23:43PM -0700, Nicolin Chen wrote:
> The set_sysclk() now is used to override the output bit clock rate.
> But this is not a common way to implement a set_dai_sysclk(). And
> this creates a problem when a general machine driver (simple-card
> for example) tries to do set_dai_sysclk() by passing an input clock
> rate for the baud clock instead of setting the bit clock rate as
> fsl_ssi driver expected.
>
> So this patch solves this problem by firstly removing set_sysclk()
> since the hw_params() can calculate the bit clock rate. Secondly,
> in order not to break those TDM use cases which previously might
> have been using set_sysclk() to override the bit clock rate, this
> patch changes the driver to override it based on the slot number.
>
> The patch also removes an obsolete comment of the dir parameter.
>
> Signed-off-by: Nicolin Chen <[email protected]>
Forgot to mention, I think that it's better to wait for a couple of
Tested-by from those who use the TDM mode of SSI before applying it.
Thanks
Nicolin
On 08/09/2017 07:42, Nicolin Chen wrote:
> On Thu, Sep 07, 2017 at 10:23:43PM -0700, Nicolin Chen wrote:
>> The set_sysclk() now is used to override the output bit clock rate.
>> But this is not a common way to implement a set_dai_sysclk(). And
>> this creates a problem when a general machine driver (simple-card
>> for example) tries to do set_dai_sysclk() by passing an input clock
>> rate for the baud clock instead of setting the bit clock rate as
>> fsl_ssi driver expected.
>>
>> So this patch solves this problem by firstly removing set_sysclk()
>> since the hw_params() can calculate the bit clock rate. Secondly,
>> in order not to break those TDM use cases which previously might
>> have been using set_sysclk() to override the bit clock rate, this
>> patch changes the driver to override it based on the slot number.
>>
>> The patch also removes an obsolete comment of the dir parameter.
>>
>> Signed-off-by: Nicolin Chen <[email protected]>
> Forgot to mention, I think that it's better to wait for a couple of
> Tested-by from those who use the TDM mode of SSI before applying it.
I can check next Monday or Tuesday.
Arnaud
> Thanks
> Nicolin
Hi Nicolin,
> On Thu, Sep 07, 2017 at 10:23:43PM -0700, Nicolin Chen wrote:
>> The set_sysclk() now is used to override the output bit clock rate.
>> But this is not a common way to implement a set_dai_sysclk(). And
>> this creates a problem when a general machine driver (simple-card
>> for example) tries to do set_dai_sysclk() by passing an input clock
>> rate for the baud clock instead of setting the bit clock rate as
>> fsl_ssi driver expected.
>>
>> So this patch solves this problem by firstly removing set_sysclk()
>> since the hw_params() can calculate the bit clock rate. Secondly,
>> in order not to break those TDM use cases which previously might
>> have been using set_sysclk() to override the bit clock rate, this
>> patch changes the driver to override it based on the slot number.
>>
>> The patch also removes an obsolete comment of the dir parameter.
>>
>> Signed-off-by: Nicolin Chen <[email protected]>
>
> Forgot to mention, I think that it's better to wait for a couple of
> Tested-by from those who use the TDM mode of SSI before applying it.
Although I'm not the PCM user, I've tested your patch and it works :-)
Tested-by: Łukasz Majewski <[email protected]>
Test HW: imx6q + tfa9879 codec
>
> Thanks
> Nicolin
>
--
Best regards,
Lukasz Majewski
--
DENX Software Engineering GmbH, Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: [email protected]
Hello Nicolin
On 08/09/2017 07:23, Nicolin Chen wrote:
> The set_sysclk() now is used to override the output bit clock rate.
> But this is not a common way to implement a set_dai_sysclk(). And
> this creates a problem when a general machine driver (simple-card
> for example) tries to do set_dai_sysclk() by passing an input clock
> rate for the baud clock instead of setting the bit clock rate as
> fsl_ssi driver expected.
>
> So this patch solves this problem by firstly removing set_sysclk()
> since the hw_params() can calculate the bit clock rate. Secondly,
> in order not to break those TDM use cases which previously might
> have been using set_sysclk() to override the bit clock rate, this
> patch changes the driver to override it based on the slot number.
>
> The patch also removes an obsolete comment of the dir parameter.
>
> Signed-off-by: Nicolin Chen <[email protected]>
> ---
> sound/soc/fsl/fsl_ssi.c | 26 ++++++++------------------
> 1 file changed, 8 insertions(+), 18 deletions(-)
>
> diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
> index 64598d1..3657c88 100644
> --- a/sound/soc/fsl/fsl_ssi.c
> +++ b/sound/soc/fsl/fsl_ssi.c
> @@ -197,12 +197,12 @@ struct fsl_ssi_soc_data {
> * @use_dma: DMA is used or FIQ with stream filter
> * @use_dual_fifo: DMA with support for both FIFOs used
> * @fifo_deph: Depth of the SSI FIFOs
> + * @slots: number of slots
> * @rxtx_reg_val: Specific register settings for receive/transmit configuration
> *
> * @clk: SSI clock
> * @baudclk: SSI baud clock for master mode
> * @baudclk_streams: Active streams that are using baudclk
> - * @bitclk_freq: bitclock frequency set by .set_dai_sysclk
> *
> * @dma_params_tx: DMA transmit parameters
> * @dma_params_rx: DMA receive parameters
> @@ -233,12 +233,12 @@ struct fsl_ssi_private {
> bool use_dual_fifo;
> bool has_ipg_clk_name;
> unsigned int fifo_depth;
> + unsigned int slots;
> struct fsl_ssi_rxtx_reg_val rxtx_reg_val;
>
> struct clk *clk;
> struct clk *baudclk;
> unsigned int baudclk_streams;
> - unsigned int bitclk_freq;
>
> /* regcache for volatile regs */
> u32 regcache_sfcsr;
> @@ -700,8 +700,7 @@ static void fsl_ssi_shutdown(struct snd_pcm_substream *substream,
> * Note: This function can be only called when using SSI as DAI master
> *
> * Quick instruction for parameters:
> - * freq: Output BCLK frequency = samplerate * 32 (fixed) * channels
> - * dir: SND_SOC_CLOCK_OUT -> TxBCLK, SND_SOC_CLOCK_IN -> RxBCLK.
> + * freq: Output BCLK frequency = samplerate * 32 (fixed) * slots (or channels)
Slots are not necessarily 32 bits wide.
Indeed, a simple grep of snd_soc_dai_set_tdm_slot shows 16, 24, 32 and 0
bit usages. (don't know the real meaning of 0 BTW...)
So it would be good to also remember the slot width given in the
snd_soc_dai_set_tdm_slot() call.
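For illustration only, a rough sketch (not a tested patch) of what
remembering the width could look like on top of your change:

        /* in struct fsl_ssi_private, next to the new slots field */
        unsigned int slot_width;

        /* in fsl_ssi_set_dai_tdm_slot(), next to ssi_private->slots = slots; */
        ssi_private->slot_width = slot_width;

        /* in fsl_ssi_set_bclk(), instead of the fixed 32 */
        if (ssi_private->slots)
                freq = ssi_private->slots * ssi_private->slot_width *
                       params_rate(hw_params);

so that a 16-bit slot setup doesn't end up with a doubled bit clock.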
Anyway, there is something wrong in the snd_soc_dai_set_sysclk usage
by fsl_ssi.
Let's take a look again at the description:
/**
* snd_soc_dai_set_sysclk - configure DAI system or master clock.
* @dai: DAI
* @clk_id: DAI specific clock ID
* @freq: new clock frequency in Hz
* @dir: new clock direction - input/output.
*
* Configures the DAI master (MCLK) or system (SYSCLK) clocking.
*/
int snd_soc_dai_set_sysclk(struct snd_soc_dai *dai, int clk_id,
unsigned int freq, int dir)
So, it can be used to configure 2 different clocks (and more) with
different usages.
Lukasz complains that simple-card is configuring MCLK incorrectly, but
simple-card only wants to configure the SYSCLK, whereas fsl_ssi
understands "configure the MCLK..." (fsl_ssi doesn't check the clk_id at all).
I think the problem is here.
I would propose a new clk_id:

#define FSL_SSI_SYSCLK_MCLK 1

and let fsl_ssi_set_dai_sysclk(xxx, FSL_SSI_SYSCLK_MCLK, ...) override
the computed mclk.
This would leave a way for obscure TDM users to specify an arbitrary
mclk frequency for obscure reasons...
Unfortunately, such generic clock IDs (sysclk, mclk) were never widely
defined.
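For illustration only, a rough sketch (not a tested patch) of how the
handler could look, assuming the FSL_SSI_SYSCLK_MCLK ID proposed above:

        static int fsl_ssi_set_dai_sysclk(struct snd_soc_dai *cpu_dai,
                                          int clk_id, unsigned int freq, int dir)
        {
                struct fsl_ssi_private *ssi_private =
                        snd_soc_dai_get_drvdata(cpu_dai);

                /* Ignore generic sysclk calls, e.g. from simple-card */
                if (clk_id != FSL_SSI_SYSCLK_MCLK)
                        return 0;

                /* Explicit override of the computed rate */
                ssi_private->bitclk_freq = freq;

                return 0;
        }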
Is it really needed? It is true that, up to now, we have been using
fsl_ssi_set_dai_sysclk() in addition to snd_soc_dai_set_tdm_slot() only
because this was the only way to get a correct mclk setting. And if
snd_soc_dai_set_tdm_slot() now does its job correctly,
fsl_ssi_set_dai_sysclk() will become useless (except for obscure TDM
users of course).
Will it break TDM users?
I would say yes, surely, but TDM users (at least myself) may expect
something to break in the TDM audio area every time we jump to a new
Linux release...
Arnaud
> */
> static int fsl_ssi_set_bclk(struct snd_pcm_substream *substream,
> struct snd_soc_dai *cpu_dai,
> @@ -716,9 +715,9 @@ static int fsl_ssi_set_bclk(struct snd_pcm_substream *substream,
> unsigned int freq;
> bool baudclk_is_used;
>
> - /* Prefer the explicitly set bitclock frequency */
> - if (ssi_private->bitclk_freq)
> - freq = ssi_private->bitclk_freq;
> + /* Generate bit clock based on the slot or channel number */
> + if (ssi_private->slots)
> + freq = ssi_private->slots * 32 * params_rate(hw_params);
> else
> freq = params_channels(hw_params) * 32 * params_rate(hw_params);
>
> @@ -805,16 +804,6 @@ static int fsl_ssi_set_bclk(struct snd_pcm_substream *substream,
> return 0;
> }
>
> -static int fsl_ssi_set_dai_sysclk(struct snd_soc_dai *cpu_dai,
> - int clk_id, unsigned int freq, int dir)
> -{
> - struct fsl_ssi_private *ssi_private = snd_soc_dai_get_drvdata(cpu_dai);
> -
> - ssi_private->bitclk_freq = freq;
> -
> - return 0;
> -}
> -
> /**
> * fsl_ssi_hw_params - program the sample size
> *
> @@ -1121,6 +1110,8 @@ static int fsl_ssi_set_dai_tdm_slot(struct snd_soc_dai *cpu_dai, u32 tx_mask,
>
> regmap_update_bits(regs, CCSR_SSI_SCR, CCSR_SSI_SCR_SSIEN, val);
>
> + ssi_private->slots = slots;
> +
> return 0;
> }
>
> @@ -1191,7 +1182,6 @@ static const struct snd_soc_dai_ops fsl_ssi_dai_ops = {
> .hw_params = fsl_ssi_hw_params,
> .hw_free = fsl_ssi_hw_free,
> .set_fmt = fsl_ssi_set_dai_fmt,
> - .set_sysclk = fsl_ssi_set_dai_sysclk,
> .set_tdm_slot = fsl_ssi_set_dai_tdm_slot,
> .trigger = fsl_ssi_trigger,
> };
On Tue, Sep 12, 2017 at 04:35:13PM +0200, Arnaud Mouiche wrote:
> >- * freq: Output BCLK frequency = samplerate * 32 (fixed) * channels
> >- * dir: SND_SOC_CLOCK_OUT -> TxBCLK, SND_SOC_CLOCK_IN -> RxBCLK.
> >+ * freq: Output BCLK frequency = samplerate * 32 (fixed) * slots (or channels)
>
> Slots are not necessarily 32 bits width.
> Indeed, a simple grep of snd_soc_dai_set_tdm_slot shows 16, 24, 32
> and 0 bits usage. (don't know the real meaning of 0 BTW...)
> So, it should be good to also remember the slots width given in
> snd_soc_dai_set_tdm_slot() call.
RM says that the word length is fixed to 32 in I2S Master mode,
so I used it here. But I think using it with the slots could be
wrong, as you stated. What kind of clock rates (bit and lr)
does a TDM case have?

The slot width isn't handled here but in set_tdm_slot(), so I
tried to ignore it. But we probably do need it when calculating
things with the slot number.

Could you please give me a few examples of how you call
set_sysclk() and set_tdm_slot() with the current driver? The idea
here is to figure out a way to calculate the bclk in hw_params()
without getting set_sysclk() involved any more.
> Anyway, there is something wrong in the snd_soc_codec_set_sysclk
> usage by fsl_ssi
> Let's get a look again at the description:
>
> /**
> * snd_soc_dai_set_sysclk - configure DAI system or master clock.
> * @dai: DAI
> * @clk_id: DAI specific clock ID
> * @freq: new clock frequency in Hz
> * @dir: new clock direction - input/output.
> *
> * Configures the DAI master (MCLK) or system (SYSCLK) clocking.
> */
> int snd_soc_dai_set_sysclk(struct snd_soc_dai *dai, int clk_id,
> unsigned int freq, int dir)
>
> So, it can be used to configure 2 different clocks (and more) with
> different usages.
>
> Lukasz complains that simple-card is configuring MCLK incorrectly.
> but simple-card, only want to configure the SYSCLK, whereas fsl_ssi
> understand "configure the MCLK..." (fsl_ssi doesn't check the clk_id
> at all)
It's more than a clock_id in my opinion. The driver now sets
the bit clock rate via set_sysclk() rather than the MCLK rate
actually.
> I think the problem is here.
> I would propose a new clk_id
>
> #define FSL_SSI_SYSCLK_MCLK 1
>
> And leave fsl_ssi_set_dai_sysclk(xxx, FSL_SSI_SYSCLK_MCLK, ...)
> override the computed mlck.
> This will leave a way for obscure TDM users to specify a specific a
> random mclk frequency for obscure reasons...
> Unfortunately, such generic clock_id (sysclk, mclk) were never
> defined widely.
Unfortunately, it looks like a workaround to me. I understand
that leaving set_sysclk() out there to override the bit
clock is convenient, but it is not a standard ALSA design and
may eventually introduce new problems like today's.
> Is it really needed ? It is true that, up to now, we are using
> fsl_ssi_set_dai_sysclk() in addition to snd_soc_dai_set_tdm_slot()
> only because this was the only way to deal with correct mclk
> setting. And if now, snd_soc_dai_set_tdm_slot() do its job
> correctly, fsl_ssi_set_dai_sysclk() will become useless (except for
> obscure TDM users of course)
The idea here is to drop set_sysclk() for all use cases
including TDM cases. So we need a solid patch to calculate the
bit clock rate in hw_params() for TDM use cases.
On 12/09/2017 23:32, Nicolin Chen wrote:
> On Tue, Sep 12, 2017 at 04:35:13PM +0200, Arnaud Mouiche wrote:
>
>>> - * freq: Output BCLK frequency = samplerate * 32 (fixed) * channels
>>> - * dir: SND_SOC_CLOCK_OUT -> TxBCLK, SND_SOC_CLOCK_IN -> RxBCLK.
>>> + * freq: Output BCLK frequency = samplerate * 32 (fixed) * slots (or channels)
>> Slots are not necessarily 32 bits width.
>> Indeed, a simple grep of snd_soc_dai_set_tdm_slot shows 16, 24, 32
>> and 0 bits usage. (don't know the real meaning of 0 BTW...)
>> So, it should be good to also remember the slots width given in
>> snd_soc_dai_set_tdm_slot() call.
> RM says that the word length is fixed to 32 in I2S Master mode.
> So I used it here. But I think using it with the slots could be
> wrong here as you stated. What kind of clock rates (bit and lr)
> does a TDM case have?
>
> The problem of slot width here is handled in the set_tdm_slot()
> at all. So I tried to ignored that. But we probably do need it
> when calculating things with the slot number.
>
> Could you please give me a few set of examples of how you set
> set_sysclk(), set_tdm_slot() with the current driver? The idea
> here is to figure out a way to calculate the bclk in hw_params
> without getting set_sysclk() involved any more.
Here is one, where a bclk = 4*16*fs is expected
static int imx_hifi_hw_params(struct snd_pcm_substream *substream,
                              struct snd_pcm_hw_params *params)
{
        struct snd_soc_pcm_runtime *rtd = substream->private_data;
        struct imx_priv *priv = &card_priv;
        struct device *dev = &priv->pdev->dev;
        int ret = 0;
        struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
        unsigned int freq;
        int ch;

        /*
         * configuring cpu_dai
         * - bitclk (fclk is computed automatically)
         *   16 bit * 4 (max) channels * sampling rate
         * - tdm mask to select the active channels
         */
        freq = 4 * 16 * params_rate(params);
        if (freq != priv->current_freq) {
                /* clk_id and direction are not taken into account by the
                 * fsl_ssi driver */
                ret = snd_soc_dai_set_sysclk(cpu_dai, 0, freq, 0);
                if (ret) {
                        dev_err(dev, "failed to set cpu dai bitclk: %u\n", freq);
                        return ret;
                }
                priv->current_freq = freq;
        }

        ch = params_channels(params);
        if (ch != priv->current_ch) {
                ret = snd_soc_dai_set_tdm_slot(cpu_dai, (1 << ch) - 1,
                                               (1 << ch) - 1, 4, 16);
                if (ret) {
                        dev_err(dev, "failed to set cpu dai tdm slots: ch=%d\n",
                                ch);
                        return ret;
                }
                priv->current_ch = ch;
        }

        return 0;
}
In another setup, there are 8 x 16-bit slots, whatever the number of
active channels is.
In this case bclk = 128 * fs.
The number of slots is completely arbitrary. Some slots can even be
reserved for communication between codecs that don't communicate with Linux.
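For that second setup, the machine driver call would presumably be
something along these lines (a sketch only; the masks depend on which
slots Linux actually drives):

        /* 8 slots of 16 bits each -> bclk = 8 * 16 * fs = 128 * fs */
        snd_soc_dai_set_tdm_slot(cpu_dai, 0xff, 0xff, 8, 16);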
>
>> Anyway, there is something wrong in the snd_soc_codec_set_sysclk
>> usage by fsl_ssi
>> Let's get a look again at the description:
>>
>> /**
>> * snd_soc_dai_set_sysclk - configure DAI system or master clock.
>> * @dai: DAI
>> * @clk_id: DAI specific clock ID
>> * @freq: new clock frequency in Hz
>> * @dir: new clock direction - input/output.
>> *
>> * Configures the DAI master (MCLK) or system (SYSCLK) clocking.
>> */
>> int snd_soc_dai_set_sysclk(struct snd_soc_dai *dai, int clk_id,
>> unsigned int freq, int dir)
>>
>> So, it can be used to configure 2 different clocks (and more) with
>> different usages.
>>
>> Lukasz complains that simple-card is configuring MCLK incorrectly.
>> but simple-card, only want to configure the SYSCLK, whereas fsl_ssi
>> understand "configure the MCLK..." (fsl_ssi doesn't check the clk_id
>> at all)
> It's more than a clock_id in my opinion. The driver now sets
> the bit clock rate via set_sysclk() other than the MCLK rate
> actually.
>
>> I think the problem is here.
>> I would propose a new clk_id
>>
>> #define FSL_SSI_SYSCLK_MCLK 1
>>
>> And leave fsl_ssi_set_dai_sysclk(xxx, FSL_SSI_SYSCLK_MCLK, ...)
>> override the computed mlck.
>> This will leave a way for obscure TDM users to specify a specific a
>> random mclk frequency for obscure reasons...
>> Unfortunately, such generic clock_id (sysclk, mclk) were never
>> defined widely.
> Unfortunately, it looks like a work around to me. I understand
> the idea of leaving set_sysclk() out there to override the bit
> clock is convenient, but it is not a standard ALSA design and
> may eventually introduce new problems like today.
I agree. I'm not conservative at all concerning this question.
I don't see a way to remove set_sysclk without breaking current TDM
users anyway, at least for those who don't have their code upstreamed.
All the information provided through snd_soc_dai_set_tdm_slot(cpu_dai,
mask, mask, slots, width) should be enough.
In this case, for TDM users,

bclk = slots * width * fs (where slots != channels)

will manage 99% of the cases.
And the remaining 1% will concern people who need to hack the kernel so
widely that they don't care about the set_sysclk removal.
Now, looking at the code currently in linus/master:sound/soc/fsl
concerning TDM:
- imx-mc13783.c : the codec is master => OK
- fsl-asoc-card.c : *something will break since the snd_soc_dai_set_sysclk
  return code is checked*
- eukrea-tlv320.c : based on imx-ssi.c, so not affected by changes in
  fsl_ssi.c. Uses set_sysclk() only to set up the clock direction
- wm1133-ev1.c : bclk generated by the codec. set_sysclk() not called
  for cpu_dai
Consequently, we can say that little is broken in upstream Linux if
set_sysclk is removed, and this can be fixed easily for fsl-asoc-card.c.
Arnaud
On Wed, Sep 13, 2017 at 10:02:20AM +0200, Arnaud Mouiche wrote:
> >Could you please give me a few set of examples of how you set
> >set_sysclk(), set_tdm_slot() with the current driver? The idea
> >here is to figure out a way to calculate the bclk in hw_params
> >without getting set_sysclk() involved any more.
> Here is one, where a bclk = 4*16*fs is expected
> In another setup, there are 8 x 16 bits slots, whatever the number
> of active channels is.
> In this case bclk = 128 * fs
> The number of slots is completely arbitrary. Some slots can even be
> reserved for communication between codecs that don't communicate
> with linux.
In summary, bclk = sample rate * slots * slot_width;
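For example, with your 8 x 16-bit slot setup at a hypothetical 48 kHz
sample rate, that gives bclk = 48000 * 8 * 16 = 6144000 Hz (i.e. 128 * fs).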
I will update my patch soon.
> >Unfortunately, it looks like a work around to me. I understand
> >the idea of leaving set_sysclk() out there to override the bit
> >clock is convenient, but it is not a standard ALSA design and
> >may eventually introduce new problems like today.
>
> I agree. I'm not conservative at all concerning this question.
> I don't see a way to remove set_sysclk without breaking current TDM
> users anyway, at least for those who don't have their code
> upstreamed.
Which TDM case would be broken by this removal? The only impact
that I can see is that the ASoC core returns an ENOTSUPP for a
set_sysclk() call now, which is something that a dai-link driver
should have taken care of anyway.
> All information provided through snd_soc_dai_set_tdm_slot( cpu_dai,
> mask, mask, slots, width ) should be enough
> In this case, for TDM users
>
> bclk = slots * width * fs (where slots is != channels)
> will manage 99 % of the cases.
> And the remaining 1% will concern people who need to hack the kernel
> so widely they don't care about the set_sysclk removal.
A patch from those people will always be welcome.
> - fsl-asoc-card.c : *something will break since
> snd_soc_dai_set_sysclk returned code is checked*
I've already submitted a patch to ignore all ENOTSUPP.
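The idea there is simply to stop treating -ENOTSUPP as fatal in the
machine driver; roughly like this (a sketch, not the exact hunk of
that patch):

        ret = snd_soc_dai_set_sysclk(rtd->cpu_dai, 0, freq, SND_SOC_CLOCK_IN);
        if (ret && ret != -ENOTSUPP) {
                dev_err(dev, "failed to set cpu dai sysclk: %d\n", ret);
                return ret;
        }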
Hi Nicolin,
> On Sat, Nov 25, 2017 at 11:29:48PM +0100, Lukasz Majewski wrote:
>
> > Nicolin, do you know what happened with this patch? I couldn't find
> > it in current linux/master.
> >
> > Has it been applied to any asoc tree for being upstreamed?
>
> A similar patch with an updated subject got applied:
> https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?h=next-20171124&id=b0a7043d5c2ccdd306959f295bf1a62be025cbf5
Thanks for the update - I was not aware of it.
Tested-by: Lukasz Majewski <[email protected]>
HW: iMX6Q.
Best regards,
Lukasz Majewski
--
DENX Software Engineering GmbH, Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: [email protected]
On Sat, Nov 25, 2017 at 11:29:48PM +0100, Lukasz Majewski wrote:
> Nicolin, do you know what happened with this patch? I couldn't find it
> in current linux/master.
>
> Has it been applied to any asoc tree for being upstreamed?
A similar patch with an updated subject got applied:
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?h=next-20171124&id=b0a7043d5c2ccdd306959f295bf1a62be025cbf5
Hi Nicolin,
> On Wed, Sep 13, 2017 at 10:02:20AM +0200, Arnaud Mouiche wrote:
>
> > >Could you please give me a few set of examples of how you set
> > >set_sysclk(), set_tdm_slot() with the current driver? The idea
> > >here is to figure out a way to calculate the bclk in hw_params
> > >without getting set_sysclk() involved any more.
>
> > Here is one, where a bclk = 4*16*fs is expected
>
> > In another setup, there are 8 x 16 bits slots, whatever the number
> > of active channels is.
> > In this case bclk = 128 * fs
> > The number of slots is completely arbitrary. Some slots can even be
> > reserved for communication between codecs that don't communicate
> > with linux.
>
> In summary, bclk = sample rate * slots * slot_width;
>
> I will update my patch soon.
>
> > >Unfortunately, it looks like a work around to me. I understand
> > >the idea of leaving set_sysclk() out there to override the bit
> > >clock is convenient, but it is not a standard ALSA design and
> > >may eventually introduce new problems like today.
> >
> > I agree. I'm not conservative at all concerning this question.
> > I don't see a way to remove set_sysclk without breaking current TDM
> > users anyway, at least for those who don't have their code
> > upstreamed.
>
> Which TDM case would be broken by this removal? The only impact
> that I can see is that the ASoC core returns an ENOTSUPP for a
> set_sysclk() call now, which is something that a dai-link driver
> should have taken care of anyway.
>
> > All information provided through snd_soc_dai_set_tdm_slot( cpu_dai,
> > mask, mask, slots, width ) should be enough
> > In this case, for TDM users
> >
> > bclk = slots * width * fs (where slots is != channels)
>
> > will manage 99 % of the cases.
> > And the remaining 1% will concern people who need to hack the kernel
> > so widely they don't care about the set_sysclk removal.
>
> A patch from those people will be always welcome.
>
> > - fsl-asoc-card.c : *something will break since
> > snd_soc_dai_set_sysclk returned code is checked*
>
> I've already submitted a patch to ignore all ENOTSUPP.
Nicolin, do you know what happened with this patch? I couldn't find it
in current linux/master.
Has it been applied to any asoc tree for being upstreamed?
Best regards,
Lukasz Majewski
--
DENX Software Engineering GmbH, Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: [email protected]