Subject: Re: [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory
From: Boris Ostrovsky
To: Julien Grall, xen-devel@lists.xen.org
Cc: sstabellini@kernel.org, jgross@suse.com, linux-kernel@vger.kernel.org, stable@vger.kernel.org, Feng Kan
Date: Thu, 1 Jun 2017 09:33:52 -0400
References: <20170531130357.14492-1-julien.grall@arm.com> <7199e366-f56a-acc8-ffa5-0c85d6975049@oracle.com> <592886a8-1443-6475-e318-85cb9acecead@arm.com>
In-Reply-To: <592886a8-1443-6475-e318-85cb9acecead@arm.com>

On 06/01/2017 08:50 AM, Julien Grall wrote:
> Hi Boris,
>
> On 31/05/17 14:54, Boris Ostrovsky wrote:
>> On 05/31/2017 09:03 AM, Julien Grall wrote:
>>> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page
>>> granularity" did not go far enough to support 64KB in mmap_batch_fn.
>>>
>>> The variable 'nr' is the number of 4KB chunks to map. However, when
>>> Linux is using 64KB page granularity, the array of pages
>>> (vma->vm_private_data) contains one page per 64KB. Fix it by
>>> incrementing st->index correctly.
>>>
>>> Furthermore, st->va is not correctly incremented, as PAGE_SIZE !=
>>> XEN_PAGE_SIZE.
>>>
>>> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page granularity")
>>> CC: stable@vger.kernel.org
>>> Reported-by: Feng Kan
>>> Signed-off-by: Julien Grall
>>> ---
>>>  drivers/xen/privcmd.c | 4 ++--
>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>> index 7a92a5e1d40c..feca75b07fdd 100644
>>> --- a/drivers/xen/privcmd.c
>>> +++ b/drivers/xen/privcmd.c
>>> @@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr, void *state)
>>>  			st->global_error = 1;
>>>  		}
>>>  	}
>>> -	st->va += PAGE_SIZE * nr;
>>> -	st->index += nr;
>>> +	st->va += XEN_PAGE_SIZE * nr;
>>> +	st->index += nr / XEN_PFN_PER_PAGE;
>>>
>>>  	return 0;
>>>  }
>>
>>
>> Are we still using PAGE_MASK for xen_remap_domain_gfn_array()?
>
> Do you mean in the xen_xlate_remap_gfn_array implementation? If so,
> there is no use of PAGE_MASK, as the code has been converted to
> support 64KB page granularity.
>
> If you mean the x86 version of xen_remap_domain_gfn_array, then we
> don't really care, as x86 only uses 4KB page granularity.

I meant right above the change that you made. Should it also be
replaced with XEN_PAGE_MASK? (Sorry for being unclear.)

==>
	ret = xen_remap_domain_gfn_array(st->vma, st->va & PAGE_MASK, gfnp, nr,
					 (int *)gfnp, st->vma->vm_page_prot,
					 st->domain, cur_pages);

	/* Adjust the global_error? */
	if (ret != nr) {
		if (ret == -ENOENT)
			st->global_error = -ENOENT;
		else {
			/* Record that at least one error has happened. */
			if (st->global_error == 0)
				st->global_error = 1;
		}
	}
	st->va += PAGE_SIZE * nr;
	st->index += nr;