Subject: Re: [PATCH v3 2/2] Implement SDHCI CQE support for DesignWare SDHCI.
Date: Fri, 20 Oct 2023 12:23:32 +0300
Message-ID: <9c1c6352-c392-4d14-8907-860e6c15c654@intel.com>
From: Adrian Hunter
To: Sergey Khimich, linux-kernel@vger.kernel.org
Cc: linux-mmc@vger.kernel.org, Ulf Hansson, Philipp Zabel, Shawn Lin, Jyan Chou
In-Reply-To: <0a3f634b-62d4-475d-9cb0-d7eb5ee12b17@gmail.com>
References: <20231002113301.1531717-1-serghox@gmail.com> <20231002113301.1531717-3-serghox@gmail.com> <0932b124-16da-495c-9706-bbadadb3b076@intel.com> <0a3f634b-62d4-475d-9cb0-d7eb5ee12b17@gmail.com>

On 16/10/23 14:42, Sergey Khimich wrote:
> Hello Adrian!
>
> Thanks for review and comments! There are few questions I'd like to clarify:
>
> On 09.10.2023 18:39, Adrian Hunter wrote:
>> On 2/10/23 14:33, Sergey Khimich wrote:
>>> From: Sergey Khimich
>>>
>>> For enabling CQE support just set 'supports-cqe' in your DevTree file
>>> for appropriate mmc node.
>>>
>>> Signed-off-by: Sergey Khimich
>>> ---
>>>   drivers/mmc/host/Kconfig            |   1 +
>>>   drivers/mmc/host/sdhci-of-dwcmshc.c | 233 +++++++++++++++++++++++++++-
>>>   2 files changed, 232 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
>>> index 554e67103c1a..f3380b014ca9 100644
>>> --- a/drivers/mmc/host/Kconfig
>>> +++ b/drivers/mmc/host/Kconfig
>>> @@ -233,6 +233,7 @@ config MMC_SDHCI_OF_DWCMSHC
>>>       depends on MMC_SDHCI_PLTFM
>>>       depends on OF
>>>       depends on COMMON_CLK
>>> +    select MMC_CQHCI
>>>       help
>>>         This selects Synopsys DesignWare Cores Mobile Storage Controller
>>>         support.
>>> diff --git a/drivers/mmc/host/sdhci-of-dwcmshc.c b/drivers/mmc/host/sdhci-of-dwcmshc.c
>>> index 3a3bae6948a8..7d43ae011811 100644
>>> --- a/drivers/mmc/host/sdhci-of-dwcmshc.c
>>> +++ b/drivers/mmc/host/sdhci-of-dwcmshc.c
>>> @@ -20,6 +20,7 @@
>>>   #include
>>>
>>>   #include "sdhci-pltfm.h"
>>> +#include "cqhci.h"
>>>
>>>   #define SDHCI_DWCMSHC_ARG2_STUFF    GENMASK(31, 16)
>>>
>>> @@ -36,6 +37,9 @@
>>>   #define DWCMSHC_ENHANCED_STROBE        BIT(8)
>>>   #define DWCMSHC_EMMC_ATCTRL        0x40
>>>
>>> +/* DWC IP vendor area 2 pointer */
>>> +#define DWCMSHC_P_VENDOR_AREA2        0xea
>>> +
>>>   /* Rockchip specific Registers */
>>>   #define DWCMSHC_EMMC_DLL_CTRL        0x800
>>>   #define DWCMSHC_EMMC_DLL_RXCLK        0x804
>>> @@ -75,6 +79,10 @@
>>>   #define BOUNDARY_OK(addr, len) \
>>>       ((addr | (SZ_128M - 1)) == ((addr + len - 1) | (SZ_128M - 1)))
>>>
>>> +#define DWCMSHC_SDHCI_CQE_TRNS_MODE    (SDHCI_TRNS_MULTI | \
>>> +                     SDHCI_TRNS_BLK_CNT_EN | \
>>> +                     SDHCI_TRNS_DMA)
>>> +
>>>   enum dwcmshc_rk_type {
>>>       DWCMSHC_RK3568,
>>>       DWCMSHC_RK3588,
>>> @@ -90,7 +98,8 @@ struct rk35xx_priv {
>>>
>>>   struct dwcmshc_priv {
>>>       struct clk    *bus_clk;
>>> -    int vendor_specific_area1; /* P_VENDOR_SPECIFIC_AREA reg */
>>> +    int vendor_specific_area1; /* P_VENDOR_SPECIFIC_AREA1 reg */
>>> +    int vendor_specific_area2; /* P_VENDOR_SPECIFIC_AREA2 reg */
>>>       void *priv; /* pointer to SoC private stuff */
>>>   };
>>>
>>> @@ -210,6 +219,147 @@ static void dwcmshc_hs400_enhanced_strobe(struct mmc_host *mmc,
>>>       sdhci_writel(host, vendor, reg);
>>>   }
>>>
>>> +static u32 dwcmshc_cqe_irq_handler(struct sdhci_host *host, u32 intmask)
>>> +{
>>> +    int cmd_error = 0;
>>> +    int data_error = 0;
>>> +
>>> +    if (!sdhci_cqe_irq(host, intmask, &cmd_error, &data_error))
>>> +        return intmask;
>>> +
>>> +    cqhci_irq(host->mmc, intmask, cmd_error, data_error);
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static void dwcmshc_sdhci_cqe_enable(struct mmc_host *mmc)
>>> +{
>>> +    struct sdhci_host *host = mmc_priv(mmc);
>>> +    u32 pstate;
>>> +    u8 ctrl;
>>> +    int count = 10;
>>> +
>>> +    /*
>>> +     * CQE gets stuck if it sees Buffer Read Enable bit set, which can be
>>> +     * the case after tuning, so ensure the buffer is drained.
>>> +     */
>>> +    pstate = sdhci_readl(host, SDHCI_PRESENT_STATE);
>>> +    while (pstate & SDHCI_DATA_AVAILABLE) {
>>> +        sdhci_readl(host, SDHCI_BUFFER);
>>> +        pstate = sdhci_readl(host, SDHCI_PRESENT_STATE);
>>> +        if (count-- == 0) {
>>> +            dev_warn(mmc_dev(host->mmc),
>>> +                 "CQE may get stuck because the Buffer Read Enable bit is set\n");
>>> +            break;
>>> +        }
>>> +        mdelay(1);
>>> +    }
>>
>> An alternative, which might be easier, is to do a
>> data reset which may also help allow the device to
>> subsequently enter low power states.
>> Refer commit f8870ae6e2d6be75b1accc2db981169fdfbea7ab
>> and commit 7b7d57fd1b773d25d8358c6017592b4928bf76ce
>
> Thanks, I'll fix it in the next version of the patch.
>
>>
>>> +
>>> +    sdhci_writew(host, DWCMSHC_SDHCI_CQE_TRNS_MODE, SDHCI_TRANSFER_MODE);
>>> +
>>> +    sdhci_cqe_enable(mmc);
>>> +
>>> +    /*
>>> +     * The "DesignWare Cores Mobile Storage Host Controller
>>> +     * DWC_mshc / DWC_mshc_lite Databook" says:
>>> +     * when Host Version 4 Enable" is 1 in Host Control 2 register,
>>> +     * SDHCI_CTRL_ADMA32 bit means ADMA2 is selected.
>>> +     * Selection of 32-bit/64-bit System Addressing:
>>> +     * either 32-bit or 64-bit system addressing is selected by
>>> +     * 64-bit Addressing bit in Host Control 2 register.
>>> +     *
>>> +     * On the other hand the "DesignWare Cores Mobile Storage Host
>>> +     * Controller DWC_mshc / DWC_mshc_lite User Guide" says, that we have to
>>> +     * set DMA_SEL to ADMA2 _only_ mode in the Host Control 2 register.
>>> +     */
>>> +    ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
>>> +    ctrl &= ~SDHCI_CTRL_DMA_MASK;
>>> +    ctrl |= SDHCI_CTRL_ADMA32;
>>> +    sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
>>> +}
>>> +
>>> +static void dwcmshc_sdhci_cqe_disable(struct mmc_host *mmc, bool recovery)
>>> +{
>>> +    /*
>>> +     * If an ioctl was issued, cqe_disable will be called.
>>> +     * For CQE of sdhci-of-dwcmshc, the previous in-flight cmd will be lost quietly.
>>> +     * So wait for mmc idle state.
>>
>> This sounds like it should be fixed in the mmc block driver.
>> Can you provide an example of when this happens?
>
> Unfortunately I can't provide an example.
> But this part of the patch is updated by me on the advice of Shawn Lin after his reviewing V1 of the patch.
> Please find his message here:
> https://patchwork.kernel.org/project/linux-mmc/patch/20230825143525.869906-2-serghox@gmail.com/
>
> Just in case here I also quote the part of his comment that refers to this part of the patch:
> "And another issue was found when Rockchip added CQE support for
> sdhci-of-dwcmshc internally, is that if a ioctl was issued, cqe_disable
> will be called. For CQE of sdhci-of-dwcmshc, the previous in-flight cmd
> will be lost quietly. So a mmc->cqe_ops->cqe_wait_for_idle(mmc) should
> be added before sdhci_cqe_disable(), so you need a dwcmshc specified
> cqe_disable hook in sdhci-of-dwcmshc."

The mmc block driver already does a wait for idle before ioctl commands,
refer mmc_blk_mq_issue_rq() case MMC_ISSUE_SYNC.  Without more information
we cannot assume the upstream kernel has a problem with this.

>
>>
>>> +     */
>>> +    mmc->cqe_ops->cqe_wait_for_idle(mmc);
>>> +
>>> +    return sdhci_cqe_disable(mmc, recovery);
>>> +}
>>> +
>>> +static void dwcmshc_cqhci_set_tran_desc(u8 *desc, dma_addr_t addr, int len, bool end,
>>> +                    bool dma64)
>>> +{
>>> +    __le32 *attr = (__le32 __force *)desc;
>>> +
>>> +    *attr = (CQHCI_VALID(1) |
>>> +         CQHCI_END(end ? 1 : 0) |
>>> +         CQHCI_INT(0) |
>>> +         CQHCI_ACT(0x4) |
>>> +         CQHCI_DAT_LENGTH(len));
>>> +
>>> +    if (dma64) {
>>> +        __le64 *dataddr = (__le64 __force *)(desc + 4);
>>> +
>>> +        dataddr[0] = cpu_to_le64(addr);
>>> +    } else {
>>> +        __le32 *dataddr = (__le32 __force *)(desc + 4);
>>> +
>>> +        dataddr[0] = cpu_to_le32(addr);
>>> +    }
>>> +}
>>
>> This is the same as cqhci_set_tran_desc().  Might as well export that
>> instead.
>
> Thanks, I'll fix it in the next version of the patch.
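
For reference, exporting the existing core helper instead of duplicating it
would look roughly like the following. This is an untested sketch; the exact
context in cqhci-core.c and cqhci.h needs checking.

In cqhci-core.c, drop the static qualifier and export the symbol:

-static void cqhci_set_tran_desc(u8 *desc, dma_addr_t addr, int len, bool end,
-				bool dma64)
+void cqhci_set_tran_desc(u8 *desc, dma_addr_t addr, int len, bool end,
+			 bool dma64)
 {
	...
 }
+EXPORT_SYMBOL_GPL(cqhci_set_tran_desc);

And add a declaration to cqhci.h:

+void cqhci_set_tran_desc(u8 *desc, dma_addr_t addr, int len, bool end, bool dma64);

Then sdhci-of-dwcmshc.c can call cqhci_set_tran_desc() directly and drop its
local copy.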
>>> +
>>> +static void dwcmshc_cqhci_prep_tran_desc(struct mmc_data *data,
>>> +                     struct cqhci_host *cq_host,
>>> +                     u8 *desc, int sg_count)
>>> +{
>>> +    int i, len, tmplen, offset;
>>> +    bool end = false;
>>> +    bool dma64 = cq_host->dma64;
>>> +    dma_addr_t addr;
>>> +    struct scatterlist *sg;
>>> +
>>> +    for_each_sg(data->sg, sg, sg_count, i) {
>>> +        addr = sg_dma_address(sg);
>>> +        len = sg_dma_len(sg);
>>> +
>>> +        /*
>>> +         * According to the "DesignWare Cores Mobile Storage Host Controller
>>> +         * DWC_mshc / DWC_mshc_lite Databook" the host memory data buffer size
>>> +         * and start address must not exceed 128 Mb. If it exceeds,
>>> +         * the data buffer must be split using two descritors.
>>> +         */
>>> +
>>> +        if (likely(BOUNDARY_OK(addr, len))) {
>>> +            if ((i + 1) == sg_count)
>>> +                end = true;
>>> +            dwcmshc_cqhci_set_tran_desc(desc, addr, len, end, dma64);
>>> +            desc += cq_host->trans_desc_len;
>>> +        } else {
>>> +            offset = addr & (SZ_128M - 1);
>>> +            tmplen = SZ_128M - offset;
>>> +            dwcmshc_cqhci_set_tran_desc(desc, addr, tmplen, end, dma64);
>>> +            desc += cq_host->trans_desc_len;
>>> +
>>> +            if ((i + 1) == sg_count)
>>> +                end = true;
>>> +
>>> +            addr += tmplen;
>>> +            len -= tmplen;
>>> +            dwcmshc_cqhci_set_tran_desc(desc, addr, len, end, dma64);
>>> +            desc += cq_host->trans_desc_len;
>>> +        }
>>> +    }
>>> +}
>>
>> Could this be done more like dwcmshc_adma_write_desc()
>
> Actually I'm not sure what do you mean. I reused checking boundary construction with
> "BOUNDARY_OK" macro and used the same variable names.
> I would appreciate it if you could clarify
> what does mean "more like dwcmshc_adma_write_desc()"

Provide a hook for cqhci_set_tran_desc() instead of cqhci_prep_tran_desc().
You'll need to check the details, but something like:

diff --git a/drivers/mmc/host/cqhci-core.c b/drivers/mmc/host/cqhci-core.c
index b3d7d6d8d654..98e7e9d3030d 100644
--- a/drivers/mmc/host/cqhci-core.c
+++ b/drivers/mmc/host/cqhci-core.c
@@ -522,7 +522,10 @@ static int cqhci_prep_tran_desc(struct mmc_request *mrq,
 		if ((i+1) == sg_count)
 			end = true;
-		cqhci_set_tran_desc(desc, addr, len, end, dma64);
+		if (cq_host->ops->set_tran_desc)
+			cq_host->ops->set_tran_desc(&desc, addr, len, end, dma64);
+		else
+			cqhci_set_tran_desc(desc, addr, len, end, dma64);
 		desc += cq_host->trans_desc_len;
 	}

And:

static void dwcmshc_set_tran_desc(u8 **desc, dma_addr_t addr, int len, bool end,
				  bool dma64)
{
	int tmplen, offset;

	if (likely(!len || BOUNDARY_OK(addr, len))) {
		cqhci_set_tran_desc(*desc, addr, len, end, dma64);
		return;
	}

	offset = addr & (SZ_128M - 1);
	tmplen = SZ_128M - offset;
	cqhci_set_tran_desc(*desc, addr, tmplen, false, dma64);

	addr += tmplen;
	len -= tmplen;
	*desc += cq_host->trans_desc_len;
	cqhci_set_tran_desc(*desc, addr, len, end, dma64);
}

(A rough note on how the dwcmshc ops table might then register such a hook is
appended at the end of this mail.)

>>
>>> +
>>> +static void dwcmshc_cqhci_dumpregs(struct mmc_host *mmc)
>>> +{
>>> +    sdhci_dumpregs(mmc_priv(mmc));
>>> +}
>>> +
>>>   static void dwcmshc_rk3568_set_clock(struct sdhci_host *host, unsigned int clock)
>>>   {
>>>       struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
>>> @@ -345,6 +495,7 @@ static const struct sdhci_ops sdhci_dwcmshc_ops = {
>>>       .get_max_clock        = dwcmshc_get_max_clock,
>>>       .reset            = sdhci_reset,
>>>       .adma_write_desc    = dwcmshc_adma_write_desc,
>>> +    .irq            = dwcmshc_cqe_irq_handler,
>>>   };
>>>
>>>   static const struct sdhci_ops sdhci_dwcmshc_rk35xx_ops = {
>>> @@ -379,6 +530,70 @@ static const struct sdhci_pltfm_data sdhci_dwcmshc_rk35xx_pdata = {
>>>              SDHCI_QUIRK2_CLOCK_DIV_ZERO_BROKEN,
>>>   };
>>>
>>> +static const struct cqhci_host_ops dwcmshc_cqhci_ops = {
>>> +    .enable        = dwcmshc_sdhci_cqe_enable,
>>> +    .disable    = dwcmshc_sdhci_cqe_disable,
>>> +    .dumpregs    = dwcmshc_cqhci_dumpregs,
>>> +    .prep_tran_desc    = dwcmshc_cqhci_prep_tran_desc,
>>> +};
>>> +
>>> +static void dwcmshc_cqhci_init(struct sdhci_host *host, struct platform_device *pdev)
>>> +{
>>> +    struct cqhci_host *cq_host;
>>> +    struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
>>> +    struct dwcmshc_priv *priv = sdhci_pltfm_priv(pltfm_host);
>>> +    bool dma64 = false;
>>> +    u16 clk;
>>> +    int err;
>>> +
>>> +    host->mmc->caps2 |= MMC_CAP2_CQE | MMC_CAP2_CQE_DCMD;
>>> +    cq_host = devm_kzalloc(&pdev->dev, sizeof(*cq_host), GFP_KERNEL);
>>> +    if (!cq_host) {
>>> +        dev_err(mmc_dev(host->mmc), "Unable to setup CQE: not enough memory\n");
>>> +        return;
>>> +    }
>>> +
>>> +    /*
>>> +     * For dwcmshc host controller we have to enable internal clock
>>> +     * before access to some registers from Vendor Specific Aria 2.
>>> +     */
>>> +    clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
>>> +    clk |= SDHCI_CLOCK_INT_EN;
>>> +    sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);
>>> +    clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
>>> +    if (!(clk & SDHCI_CLOCK_INT_EN)) {
>>> +        dev_err(mmc_dev(host->mmc), "Unable to setup CQE: internal clock enable error\n");
>>> +        goto free_cq_host;
>>> +    }
>>> +
>>> +    cq_host->mmio = host->ioaddr + priv->vendor_specific_area2;
>>> +    cq_host->ops = &dwcmshc_cqhci_ops;
>>> +
>>> +    /* Enable using of 128-bit task descriptors */
>>> +    dma64 = host->flags & SDHCI_USE_64_BIT_DMA;
>>> +    if (dma64) {
>>> +        dev_dbg(mmc_dev(host->mmc), "128-bit task descriptors\n");
>>> +        cq_host->caps |= CQHCI_TASK_DESC_SZ_128;
>>> +    }
>>> +    err = cqhci_init(cq_host, host->mmc, dma64);
>>> +    if (err) {
>>> +        dev_err(mmc_dev(host->mmc), "Unable to setup CQE: error %d\n", err);
>>> +        goto int_clok_disable;
>>> +    }
>>> +
>>> +    dev_dbg(mmc_dev(host->mmc), "CQE init done\n");
>>> +
>>> +    return;
>>> +
>>> +int_clok_disable:
>>
>> 'clok' is an odd abbreviation of 'clock'.  Perhaps 'clk' or just 'clock'
>
> Thanks, I'll fix it in the next version of the patch.

>>
>>> +    clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
>>> +    clk &= ~SDHCI_CLOCK_INT_EN;
>>> +    sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);
>>> +
>>> +free_cq_host:
>>> +    devm_kfree(&pdev->dev, cq_host);
>>> +}
>>> +
>>>   static int dwcmshc_rk35xx_init(struct sdhci_host *host, struct dwcmshc_priv *dwc_priv)
>>>   {
>>>       int err;
>>> @@ -471,7 +686,7 @@ static int dwcmshc_probe(struct platform_device *pdev)
>>>       struct rk35xx_priv *rk_priv = NULL;
>>>       const struct sdhci_pltfm_data *pltfm_data;
>>>       int err;
>>> -    u32 extra;
>>> +    u32 extra, caps;
>>>
>>>       pltfm_data = device_get_match_data(&pdev->dev);
>>>       if (!pltfm_data) {
>>> @@ -519,6 +734,8 @@ static int dwcmshc_probe(struct platform_device *pdev)
>>>
>>>       priv->vendor_specific_area1 =
>>>           sdhci_readl(host, DWCMSHC_P_VENDOR_AREA1) & DWCMSHC_AREA1_MASK;
>>> +    priv->vendor_specific_area2 =
>>> +        sdhci_readw(host, DWCMSHC_P_VENDOR_AREA2);
>>>
>>>       host->mmc_host_ops.request = dwcmshc_request;
>>>       host->mmc_host_ops.hs400_enhanced_strobe = dwcmshc_hs400_enhanced_strobe;
>>> @@ -547,6 +764,10 @@ static int dwcmshc_probe(struct platform_device *pdev)
>>>           sdhci_enable_v4_mode(host);
>>>   #endif
>>>
>>> +    caps = sdhci_readl(host, SDHCI_CAPABILITIES);
>>> +    if (caps & SDHCI_CAN_64BIT_V4)
>>> +        sdhci_enable_v4_mode(host);
>>> +
>>>       host->mmc->caps |= MMC_CAP_WAIT_WHILE_BUSY;
>>>
>>>       pm_runtime_get_noresume(dev);
>>> @@ -557,6 +778,14 @@ static int dwcmshc_probe(struct platform_device *pdev)
>>>       if (err)
>>>           goto err_rpm;
>>>
>>> +    /* Setup Command Queue Engine if enabled */
>>> +    if (device_property_read_bool(&pdev->dev, "supports-cqe")) {
>>> +        if (caps & SDHCI_CAN_64BIT_V4)
>>> +            dwcmshc_cqhci_init(host, pdev);
>>> +        else
>>> +            dev_warn(dev, "Cannot enable CQE without V4 mode support\n");
>>> +    }
>>> +
>>>       if (rk_priv)
>>>           dwcmshc_rk35xx_postinit(host, priv);
>>>
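
One more note on the hook approach sketched earlier: the dwcmshc_set_tran_desc()
example uses cq_host->trans_desc_len, so the callback will probably also need
the cqhci_host pointer passed in - one of the details to check. Assuming the new
cqhci_host_ops member is called set_tran_desc as in that sketch, the driver's
ops table would then register it in place of .prep_tran_desc, roughly
(untested sketch, hook name and final signature not settled):

static const struct cqhci_host_ops dwcmshc_cqhci_ops = {
	.enable		= dwcmshc_sdhci_cqe_enable,
	.disable	= dwcmshc_sdhci_cqe_disable,
	.dumpregs	= dwcmshc_cqhci_dumpregs,
	/* replaces .prep_tran_desc; splits descriptors on 128 MB boundaries */
	.set_tran_desc	= dwcmshc_set_tran_desc,
};

with a matching member added to struct cqhci_host_ops in cqhci.h, e.g. (with the
cq_host argument mentioned above):

	void (*set_tran_desc)(struct cqhci_host *cq_host, u8 **desc,
			      dma_addr_t addr, int len, bool end, bool dma64);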