From: Ulf Hansson
Date: Mon, 23 Apr 2018 08:47:09 +0200
Subject: Re: [RFC 0/2] dw_mmc: add multislot support
To: Eugeniy Paltsev
Cc: jh80.chung@samsung.com, linux-kernel@vger.kernel.org,
    Alexey.Brodkin@synopsys.com, linux-mmc@vger.kernel.org, krzk@kernel.org,
    linux-snps-arc@lists.infradead.org, kgene@kernel.org,
    shawn.lin@rock-chips.com, linux-arm-kernel@lists.infradead.org,
    linux-samsung-soc@vger.kernel.org
In-Reply-To: <1524239609.10138.47.camel@synopsys.com>
References: <20180417121130.25281-1-Eugeniy.Paltsev@synopsys.com>
 <1524239609.10138.47.camel@synopsys.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 20 April 2018 at 17:53, Eugeniy Paltsev wrote:
> Hi Ulf,
>
> On Fri, 2018-04-20 at 09:35 +0200, Ulf Hansson wrote:
>> [...]
>>
>> >
>> > 2. Add the missing pieces to support multislot mode in the DesignWare
>> >    MMC driver.
>> >    * Add the missing slot switch to the __dw_mci_start_request() function.
>> >    * Refactor the set_ios() function:
>> >      a) Calculate a common clock that is suitable for all slots instead
>> >         of directly using the clock value provided by the mmc core. The
>> >         common clock is the minimum among the clocks of each used slot;
>> >         it is calculated in dw_mci_calc_common_clock(), which is called
>> >         from set_ios().
>> >      b) Disable the clock only if no other slots are ON.
>> >      c) Set up the clock directly in set_ios() only if no other slots
>> >         are ON. Otherwise, adjust the clock in the
>> >         __dw_mci_start_request() function before the slot switch.
>> >      d) Move the timings and bus_width setup to separate functions.
>> >    * Use the timing field in each slot structure instead of the common
>> >      field in the host structure.
>> >    * Add locks to serialize access to the registers.
>>
>> Sorry, but this is a hack to *try* to make multi-slot work, and it
>> isn't sufficient. There were good reasons why the earlier non-working
>> multi-slot support was removed from dw_mmc.
>
> The previous multi-slot implementation was removed because nobody used
> it and nobody tested it. It contained many mistakes unrelated to
> request serialization, such as the lack of a slot switch and the lack
> of a slot id in CIU commands, etc. So it was obviously never tested and
> never used on real multi-slot hardware.
>
>> Let me elaborate a bit for your understanding. The core uses a host
>> lock (mmc_claim|release_host()) to serialize operations and commands,
>> so as to conform to the SD/SDIO/(e)MMC specs. The above changes give
>> no guarantees for this. To make that work, we would need an "mmc bus
>> lock" managed by the core.
>
> In the current implementation, data transfers and commands to different
> hosts (slots) are serialized internally in the dw_mmc driver. We have a
> request queue, and when .request() is called we add the new request to
> the queue. We take a new request from the queue only once the previous
> one has finished.

That isn't sufficient. The core expects all calls to *any* of the host
ops to be serialized for one host. It does so to conform to the specs.
For example, it may call:

->set_ios()
->request()
->set_ios()
->request()
->request()

>
> So although the hosts (slots) have separate locks
> (mmc_claim|release_host()), the requests to different slots are
> serialized by the driver.
>
> Isn't that enough?

Sorry, but no.
> I'm not very familiar with the SD/SDIO/(e)MMC specs, so my assumptions
> may be wrong; in that case, please correct me.

Well, that kind of explains your simplistic approach. I would suggest
you study the specs and the behavior of the mmc core a bit more
carefully; that should give you a better understanding of the problems.

>
>> However, inventing an "mmc bus lock" would lead to other problems
>> related to I/O scheduling for the upper layers - it simply breaks. For
>> example, I/O requests for one card/slot could then starve I/O requests
>> reaching another card/slot.
>
> Nevertheless, we have to deal somehow with existing hardware that has a
> multi-slot dw_mmc controller with both slots in use...
>
> This patch at least shouldn't break anything for current users (who use
> the driver in single-slot mode).
>
> Moreover, we have tested this dual-slot implementation and haven't hit
> any problems (yet), except for a bus performance decrease in dual-slot
> mode (which is quite expected).

Honestly, I don't think the effort of implementing this is worth it!
Even if we were able to solve the problem from an mmc subsystem point of
view, we would still have the I/O scheduling problem to address. To
solve that, we would need to be able to configure the upper block layer
code to run one scheduling instance over multiple block devices...

Kind regards
Uffe