Date: Thu, 19 Feb 2015 18:07:30 +0000
From: David Vrabel <dvrabel@cantab.net>
To: Juergen Gross, linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com,
 konrad.wilk@oracle.com, david.vrabel@citrix.com, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [PATCH 06/13] xen: detect pre-allocated memory interfering with e820 map
Message-ID: <54E62662.1050703@cantab.net>
In-Reply-To: <1424242326-26611-7-git-send-email-jgross@suse.com>

On 18/02/2015 06:51, Juergen Gross wrote:
> Currently, especially for dom0, guest memory whose guest pfns do not
> match host areas populated with RAM is remapped to areas which are
> RAM-backed as well. This is done to be able to use identity
> mappings (pfn == mfn) for I/O areas.
>
> Up to now it is not checked whether the memory to be remapped is
> already in use. Remapping used memory will probably result in data
> corruption, as the remapped memory will no longer be reserved.
> Any memory allocation after the remap can claim that memory.
>
> Add infrastructure to check reserved memory areas for conflicts and,
> in case of a conflict, to react via an area-specific function.
>
> This function has three options:
> - Panic.
> - Handle the conflict by moving the data to another memory area.
>   This is indicated by a return value other than 0.
> - Just return 0. This will delay invalidating the conflicting memory
>   area until just before doing the remap. This option is usable only
>   in cases where the memory will no longer be needed when the remap
>   operation starts, e.g. for the p2m list, which has already been
>   copied by then.
>
> When doing the remap, check that no reserved page is remapped.
>
> Signed-off-by: Juergen Gross
> ---
>  arch/x86/xen/setup.c   | 185 +++++++++++++++++++++++++++++++++++++++++++++++--
>  arch/x86/xen/xen-ops.h |   2 +
>  2 files changed, 182 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index 0dda131..a0af554 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -59,6 +59,20 @@ static unsigned long xen_remap_mfn __initdata = INVALID_P2M_ENTRY;
>  static unsigned long xen_remap_pfn;
>  static unsigned long xen_max_pfn;
>
> +/*
> + * Areas with memblock_reserve()d memory to be checked against final E820 map.
> + * Each area has an associated function to handle conflicts (by either
> + * removing the conflict or by just crashing with an appropriate message).
> + * The array has a fixed size as there are only few areas of interest which are
> + * well known: kernel, page tables, p2m, initrd.
> + */
> +#define XEN_N_RESERVED_AREAS 4
> +static struct {
> +	phys_addr_t start;
> +	phys_addr_t size;
> +	int (*func)(phys_addr_t start, phys_addr_t size);
> +} xen_reserved_area[XEN_N_RESERVED_AREAS] __initdata;
> +
>  /*
>   * The maximum amount of extra memory compared to the base size.  The
>   * main scaling factor is the size of struct page.
>   * At extreme ratios
> @@ -365,10 +379,10 @@ static void __init xen_set_identity_and_remap_chunk(unsigned long start_pfn,
>  	unsigned long end_pfn, unsigned long *released, unsigned long *remapped)
>  {
>  	unsigned long pfn;
> -	unsigned long i = 0;
> +	unsigned long i;
>  	unsigned long n = end_pfn - start_pfn;
>
> -	while (i < n) {
> +	for (i = 0; i < n; ) {
>  		unsigned long cur_pfn = start_pfn + i;
>  		unsigned long left = n - i;
>  		unsigned long size = left;
> @@ -411,6 +425,53 @@ static void __init xen_set_identity_and_remap_chunk(unsigned long start_pfn,
>  			(unsigned long)__va(pfn << PAGE_SHIFT),
>  			mfn_pte(pfn, PAGE_KERNEL_IO), 0);
>  }
> +/* Check to be remapped memory area for conflicts with reserved areas.
> + *
> + * Skip regions known to be reserved which are handled later. For these
> + * regions we have to increase the remapped counter in order to reserve
> + * extra memory space.
> + *
> + * In case a memory page already in use is to be remapped, just BUG().
> + */
> +static void __init xen_set_identity_and_remap_chunk_chk(unsigned long start_pfn,
> +	unsigned long end_pfn, unsigned long *released, unsigned long *remapped)

...remap_chunk_checked()?

> +{
> +	unsigned long pfn;
> +	unsigned long area_start, area_end;
> +	unsigned i;
> +
> +	for (i = 0; i < XEN_N_RESERVED_AREAS; i++) {
> +
> +		if (!xen_reserved_area[i].size)
> +			break;
> +
> +		area_start = PFN_DOWN(xen_reserved_area[i].start);
> +		area_end = PFN_UP(xen_reserved_area[i].start +
> +				  xen_reserved_area[i].size);
> +		if (area_start >= end_pfn || area_end <= start_pfn)
> +			continue;
> +
> +		if (area_start > start_pfn)
> +			xen_set_identity_and_remap_chunk(start_pfn, area_start,
> +							 released, remapped);
> +
> +		if (area_end < end_pfn)
> +			xen_set_identity_and_remap_chunk(area_end, end_pfn,
> +							 released, remapped);
> +
> +		*remapped += min(area_end, end_pfn) -
> +			     max(area_start, start_pfn);
> +
> +		return;

Why not defer the whole chunk that conflicts?
Or, for that matter, defer all this remapping to the last minute.

David