From: Julien Grall
Date: Tue, 21 Jul 2015 14:07:54 +0100
To: Roger Pau Monné
Cc: David Vrabel, Boris Ostrovsky
Subject: Re: [Xen-devel] [PATCH v2 15/20] block/xen-blkfront: Make it running on 64KB page granularity
Message-ID: <55AE442A.6080506@citrix.com>
In-Reply-To: <55AE27C2.8090803@citrix.com>

Hi Roger,

On 21/07/15 12:06, Roger Pau Monné wrote:
> On 09/07/15 at 22:42, Julien Grall wrote:
>> From: Julien Grall
>>
>> The PV block protocol is using 4KB page granularity. The goal of this
>> patch is to allow a Linux using 64KB page granularity using block
>> device on a non-modified Xen.
>>
>> The block API is using segment which should at least be the size of a
>
>                         ^ segments

Will fix it.

>> Linux page. Therefore, the driver will have to break the page in chunk
>
>                                                            chunks ^
>
>> of 4K before giving the page to the backend.
>>
>> Breaking a 64KB segment in 4KB chunk will result to have some chunk with
>> no data.
>
> I would rewrite this as:
>
> Breaking a 64KB page into 4KB chunks can result in chunks with no data.

Sounds good.
>> As the PV protocol always require to have data in the chunk, we
>> have to count the number of Xen page which will be in use and avoid to
>
>                  ^ pages                        remove the "to" ^
>
>> sent empty chunk.
>
> ^ sending empty chunks
>
>> Note that, a pre-defined number of grant is reserved before preparing
>
>          ^ no comma              ^ grants are
>
>> the request. This pre-defined number is based on the number and the
>> maximum size of the segments. If each segment contain a very small
>
>                                                ^ contains
>
>> amount of data, the driver may reserve too much grant (16 grant is
>
>                                 ^ many grants?       ^ grants are
>
>> reserved per segment with 64KB page granularity).
>>
>> Futhermore, in the case of persistent grant we allocate one Linux page
>
> ^ Furthermore          ^ case of using persistent grants
>
>> per grant although only the 4KB of the page will be effectively use.
>
>            ^ initial                                              ^ used.
>
>> This could be improved by share the page with multiple grants.
>
>                      ^ sharing the page between

Will fix all the typos and grammatical errors in the next version.

>>
>> Signed-off-by: Julien Grall
>> Cc: Konrad Rzeszutek Wilk
>> Cc: Roger Pau Monné
>> Cc: Boris Ostrovsky
>> Cc: David Vrabel
>
> This looks much better now, thanks.
>
> Acked-by: Roger Pau Monné

Thank you!
>>  	BUG_ON(info->max_indirect_segments == 0 &&
>> -	       req->nr_phys_segments > BLKIF_MAX_SEGMENTS_PER_REQUEST);
>> +	       (XEN_PAGES_PER_SEGMENT * req->nr_phys_segments) > BLKIF_MAX_SEGMENTS_PER_REQUEST);
>>  	BUG_ON(info->max_indirect_segments &&
>> -	       req->nr_phys_segments > info->max_indirect_segments);
>> +	       (req->nr_phys_segments * XEN_PAGES_PER_SEGMENT) > info->max_indirect_segments);
>>
>>  	nseg = blk_rq_map_sg(req->q, req, info->shadow[id].sg);
>> +	nr_page = 0;
>> +	/* Calculate the number of Xen pages used */
>> +	for_each_sg(info->shadow[id].sg, sg, nseg, i) {
>> +		nr_page += (round_up(sg->offset + sg->length, XEN_PAGE_SIZE) - round_down(sg->offset, XEN_PAGE_SIZE)) >> XEN_PAGE_SHIFT;
>
> I haven't counted the characters, but this line looks too long; also, you
> can get rid of the braces since it's a single-line statement.

I was planning to use gnttab_count_grant, added in patch #3, but forgot
to do it. I will do it in the next version.

Regards,

-- 
Julien Grall