From: Arnd Bergmann
To: Baolin Wang
Cc: Jan Kara, Christoph Hellwig, axboe@kernel.dk, Alasdair G Kergon, Mike Snitzer, dm-devel@redhat.com, neilb@suse.com, tj@kernel.org, jmoyer@redhat.com, keith.busch@intel.com, bart.vanassche@sandisk.com, linux-raid@vger.kernel.org, Mark Brown, "Garg, Dinesh", LKML
Subject: Re: [PATCH 0/2] Introduce the request handling for dm-crypt
Date: Thu, 12 Nov 2015 13:57:27 +0100
Message-ID: <4436790.rotQ3a7v5c@wuerfel>
References: <20151112122400.GB27454@quack.suse.cz>
On Thursday 12 November 2015 20:51:10 Baolin Wang wrote:
> On 12 November 2015 at 20:24, Jan Kara wrote:
> > On Thu 12-11-15 19:46:26, Baolin Wang wrote:
> >> On 12 November 2015 at 19:06, Jan Kara wrote:
> >> > Well, one question is "can it handle it" and another question is how
> >> > big a gain in throughput it will bring compared to, say, 1M chunks. I
> >> > suppose there's some constant overhead to issue a request to the
> >> > crypto hw, and by the time it is encrypting 1M that overhead may be
> >> > well amortized by the cost of the encryption itself, which is in
> >> > principle linear in the size of the block. That's why I'd like to get
> >> > an idea of the real numbers...
> >>
> >> Please correct me if I misunderstood your point. Let's suppose the AES
> >> engine can handle 16M at one time. If the size of the data is less than
> >> 16M, the engine can handle it in one go. But if the data size is 20M
> >> (more than 16M), the engine driver will split the data into 16M and 4M
> >> pieces to deal with. I can't give exact numbers, but the engine prefers
> >> big chunks to small chunks, which is the hardware engine's advantage.
> >
> > No, I meant something different. I meant that if the HW can encrypt 1M
> > in, say, 1.05 ms and it can encrypt 16M in 16.05 ms, then although using
> > 16M blocks gives you some advantage, it becomes diminishingly small.
>
> But if it encrypts 16M in 1M chunks one by one, it will take much more
> than 16.05 ms (considering that the SW submits the bios one by one).

The example that Jan gave was meant to illustrate the case where it's
not much more than 16.05ms, just slightly more.
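Jan's fixed-overhead-plus-linear-cost argument can be sketched numerically. In this toy model the 0.05 ms per-request overhead and 1 ms/MB encryption rate are illustrative assumptions chosen to reproduce his example (1M in 1.05 ms, 16M in 16.05 ms), not measured values for any real engine:

```python
# Toy model of the amortization argument: one HW request over n MB costs
# overhead + rate * n, so encrypting 16 MB as k-MB chunks costs
# (16 / k) * (overhead + rate * k). Numbers are assumptions, not data.
OVERHEAD_MS = 0.05    # assumed fixed cost to issue one request to the engine
RATE_MS_PER_MB = 1.0  # assumed linear encryption cost

def total_ms(total_mb, chunk_mb):
    requests = total_mb / chunk_mb
    return requests * (OVERHEAD_MS + RATE_MS_PER_MB * chunk_mb)

for chunk in (1, 4, 16):
    print("%2d MB chunks: %.2f ms" % (chunk, total_ms(16, chunk)))
```

With these numbers, 1M chunks take 16.80 ms, 4M chunks 16.20 ms, and a single 16M request 16.05 ms: the gain from growing the chunk beyond a few MB shrinks quickly, which is exactly why real measurements are needed to find where the returns stop being significant.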
The point is that we need real numbers to show at what size we stop
getting significant returns from increased block sizes.

> >> >> > You mentioned that you use requests because of size limitations on
> >> >> > bios - I had a look and current struct bio can easily describe 1MB
> >> >> > requests (that's assuming 64-bit architecture, 4KB pages) when we
> >> >> > have 1 page worth of struct bio_vec. Is that not enough?
> >> >>
> >> >> Usually one bio does not always use the full 1M; it may be some
> >> >> 1k/2k/8k or other small chunks. But a request can combine several
> >> >> sequential small bios into one big block, so it is better than a
> >> >> bio, at least.
> >> >
> >> > As Christoph mentions, 4.3 should be better at submitting larger
> >> > bios. Did you check it?
> >>
> >> I'm sorry, I didn't check it. What's the limitation of one bio on 4.3?
> >
> > On 4.3 it is 1 MB (which should be enough because requests are limited
> > to 512 KB by default anyway). Previously the maximum bio size depended
> > on queue parameters such as the max number of segments.
>
> But that may not be enough for a HW engine which can handle maybe
> 10M/20M at one time.

Given that you have already done measurements, can you find out how much
you lose in overall performance with your existing patch if you
artificially limit the maximum size to sizes like 256kb, 1MB, 4MB, ...?

	Arnd
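As a back-of-envelope check of the 1 MB-per-bio figure discussed above: on a 64-bit architecture with 4 KB pages, struct bio_vec is a page pointer plus two 32-bit fields, i.e. 16 bytes (the exact layout is an assumption here, not taken from a specific kernel tree), so one page of bio_vecs maps 256 pages:

```python
# Back-of-envelope reconstruction of the "1 MB per bio" figure:
# assume struct bio_vec is {page pointer, len, offset} = 8 + 4 + 4 bytes
# on 64-bit, and that each vector maps one full 4 KB page.
PAGE_SIZE = 4096
BIO_VEC_SIZE = 8 + 4 + 4                   # assumed struct layout

vecs_per_page = PAGE_SIZE // BIO_VEC_SIZE  # vectors fitting in one page
max_bio_bytes = vecs_per_page * PAGE_SIZE  # data describable by one bio

print(vecs_per_page, max_bio_bytes // (1024 * 1024))
```

That gives 256 vectors and 256 * 4 KB = 1 MiB, matching the limit quoted in the thread; an engine wanting 10M/20M per request would indeed need request-level merging rather than a single bio under these assumptions.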