Date: Thu, 9 Jun 2011 13:40:12 +0100
From: Stefano Stabellini
To: Igor Mammedov
CC: KAMEZAWA Hiroyuki, Hiroyuki Kamezawa, Michal Hocko, linux-kernel@vger.kernel.org,
    balbir@linux.vnet.ibm.com, akpm@linux-foundation.org, linux-mm@kvack.org,
    Paul Menage, Li Zefan, containers@lists.linux-foundation.org,
    Stefano Stabellini, Keir Fraser, Tim Deegan
Subject: Possible shadow bug (was: Re: [PATCH] memcg: do not expose uninitialized mem_cgroup_per_node to world)
In-Reply-To: <4DF0801F.9050908@redhat.com>
References: <1306925044-2828-1-git-send-email-imammedo@redhat.com>
    <20110601123913.GC4266@tiehlicka.suse.cz> <4DE6399C.8070802@redhat.com>
    <20110601134149.GD4266@tiehlicka.suse.cz> <4DE64F0C.3050203@redhat.com>
    <20110601152039.GG4266@tiehlicka.suse.cz> <4DE66BEB.7040502@redhat.com>
    <4DE8D50F.1090406@redhat.com> <4DEE26E7.2060201@redhat.com>
    <20110608123527.479e6991.kamezawa.hiroyu@jp.fujitsu.com>
    <4DF0801F.9050908@redhat.com>

CC'ing xen-devel and Tim.

This is a comment from a previous email in the thread:

> It is most easily reproduced only on a xen hvm 32bit guest under heavy vcpu
> contention for real cpu resources (i.e. I had to overcommit cpus and
> run several cpu hog tasks on the host to make the guest crash on a reboot
> cycle).
> And from the last experiments, the crash happens only on hosts that don't
> have the hap feature, or if hap is disabled in the hypervisor.

It makes me think that it is a shadow pagetable bug; see the details below.
You can find more context by following this thread on lkml.

On Thu, 9 Jun 2011, Igor Mammedov wrote:
> On 06/08/2011 05:35 AM, KAMEZAWA Hiroyuki wrote:
> > On Tue, 07 Jun 2011 15:25:59 +0200
> > Igor Mammedov wrote:
> >
> >> Sorry for the late reply,
> >>
> >> On 06/03/2011 03:00 PM, Hiroyuki Kamezawa wrote:
> >>> 2011/6/3 Igor Mammedov:
> >>>> On 06/02/2011 01:10 AM, Hiroyuki Kamezawa wrote:
> >>>>>>                 pc = list_entry(list->prev, struct page_cgroup, lru);
> >>>>> Hmm, I disagree that your patch is a fix for mainline. At least, a cgroup
> >>>>> before completion of create() is not populated to userland, and you will
> >>>>> never be able to rmdir() it because you can't find it.
> >>>>>
> >>>>> >  26:   e8 7d 12 30 00      call   0x3012a8
> >>>>> >  2b:*  8b 73 08            mov    0x8(%ebx),%esi    <-- trapping instruction
> >>>>> >  2e:   8b 7c 24 24         mov    0x24(%esp),%edi
> >>>>> >  32:   8b 07               mov    (%edi),%eax
> >>>>>
> >>>>> Hm, what is the call 0x3012a8 ?
> >>>>>
> >>>>             pc = list_entry(list->prev, struct page_cgroup, lru);
> >>>>             if (busy == pc) {
> >>>>                     list_move(&pc->lru, list);
> >>>>                     busy = 0;
> >>>>                     spin_unlock_irqrestore(&zone->lru_lock, flags);
> >>>>                     continue;
> >>>>             }
> >>>>             spin_unlock_irqrestore(&zone->lru_lock, flags);    <---- is call 0x3012a8
> >>>>             ret = mem_cgroup_move_parent(pc, mem, GFP_KERNEL);
> >>>>
> >>>> and
> >>>>             mov    0x8(%ebx),%esi
> >>>> is the dereferencing of 'pc' in the inlined mem_cgroup_move_parent
> >>>>
> >>> Ah, thank you for the input.. so it panicked at accessing pc->page, and "pc"
> >>> was 0xfffffff4.
> >>> It means list->prev was NULL.
> >>>
> >> yes, that's the case.
> >>
> >>>> I've looked at the vmcore once more and indeed there isn't any parallel task
> >>>> that touches the cgroups code path.
> >>>> Will investigate whether xen is to blame for the incorrect data in place.
> >>>>
> >>>> Thanks very much for your opinion.
> >>>
> >>> What is curious to me is the fact that "list->prev" is NULL.
> >>> I can see why you doubt the initialization code.... the list pointer never
> >>> contains NULL once it's used....
> >>> It smells like memory corruption or something to me. If you have the vmcore,
> >>> what does the problematic mem_cgroup_per_zone(node) contain?
> >>
> >> it has all zeros except for the last field:
> >>
> >> crash> rd f3446a00 62
> >> f3446a00:  00000000 00000000 00000000 00000000   ................
> >> f3446a10:  00000000 00000000 00000000 00000000   ................
> >> f3446a20:  00000000 00000000 00000000 00000000   ................
> >> f3446a30:  00000000 00000000 00000000 00000000   ................
> >> f3446a40:  00000000 00000000 00000000 00000000   ................
> >> f3446a50:  00000000 00000000 00000000 00000000   ................
> >> f3446a60:  00000000 00000000 00000000 00000000   ................
> >> f3446a70:  00000000 00000000 f36ef800 f3446a7c   ..........n.|jD.
> >> f3446a80:  f3446a7c f3446a84 f3446a84 f3446a8c   |jD..jD..jD..jD.
> >> f3446a90:  f3446a8c f3446a94 f3446a94 f3446a9c   .jD..jD..jD..jD.
> >> f3446aa0:  f3446a9c 00000000 00000000 00000000   .jD.............
> >> f3446ab0:  00000000 00000000 00000000 00000000   ................
> >> f3446ac0:  00000000 00000000 00000000 00000000   ................
> >> f3446ad0:  00000000 00000000 00000000 00000000   ................
> >> f3446ae0:  00000000 00000000 00000000 00000000   ................
> >> f3446af0:  00000000 f36ef800
> >>
> >> crash> struct mem_cgroup f36ef800
> >> struct mem_cgroup {
> >> ...
> >>   info = {
> >>     nodeinfo = {0xf3446a00}
> >>   },
> >> ...
> >>
> >> It looks like a very targeted corruption of the first zone, except for
> >> the last field, while the second zone and the rest are perfectly
> >> normal (i.e. have empty initialized lists).
> >>
> > Hmm, ok, thank you. Then, mem_cgroup_per_zone[] was initialized once.
> > In this kind of case, I tend to check the slab header of memory object f3446a00,
> > or check whether f3446a00 is an alive slab object or not.
>
> It looks like f3446a00 is an alive/allocated object:
>
> crash> kmem f3446a00
> CACHE     NAME      OBJSIZE  ALLOCATED  TOTAL  SLABS  SSIZE
> f7000c80  size-512      512       2251   2616    327     4k
>   SLAB      MEMORY    TOTAL  ALLOCATED  FREE
>   f3da6540  f3446000      8          1     7
>   FREE / [ALLOCATED]
>   [f3446a00]
>
>       PAGE  PHYSICAL  MAPPING  INDEX  CNT  FLAGS
>   c1fa58c0  33446000        0     70    1  2800080
>
> However, I have a related crash that can lead to the uninitialized lists of
> the first entry (i.e. similar to what we see at f3446a00): a debug kernel
> sometimes crashes in alloc_mem_cgroup_per_zone_info:
>
> XXX: pn: f208dc00, phy: 3208dc00
> XXX: pn: f2e85a00, phy: 32e85a00
> BUG: unable to handle kernel paging request at 9b74e240
> IP: [] mem_cgroup_create+0xef/0x350
> *pdpt = 0000000033542001 *pde = 0000000000000000
> Oops: 0002 [#1] SMP
> ...
>
> Pid: 1823, comm: libvirtd Tainted: G ---------------- T (2.6.32.700565 #21) HVM domU
> EIP: 0060:[] EFLAGS: 00210297 CPU: 3
> EIP is at mem_cgroup_create+0xef/0x350
> EAX: 9b74e240 EBX: f2e85a00 ECX: 00000001 EDX: 00000001
> ESI: a88c8840 EDI: a88c8840 EBP: f201deb4 ESP: f201de8c
> DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
> Process libvirtd (pid: 1823, ti=f201c000 task=f3642ab0 task.ti=f201c000)
> Stack:
>  c09579b2 f2e85a00 32e85a00 f3455800 00000000 f2e85a00 f2c14ac0 c0a5a820
> <0> fffffff4 f2c14ac0 f201def8 c049d3a7 00000000 00000000 00000000 000001ed
> <0> f2c14ac8 f5fa4400 f24fe954 f3502000 f2c14e40 f24f5608 f3502010 f2c14ac0
> Call Trace:
>  [] cgroup_mkdir+0xf7/0x450
>  [] vfs_mkdir+0x93/0xf0
>  [] ? lookup_hash+0x27/0x30
>  [] sys_mkdirat+0xde/0x100
>  [] ? call_rcu_sched+0xd/0x10
>  [] ? call_rcu+0x8/0x10
>  [] ? __put_cred+0x2f/0x50
>  [] ? sys_faccessat+0x14d/0x180
>  [] ? filp_close+0x47/0x70
>  [] sys_mkdir+0x20/0x30
>  [] sysenter_do_call+0x12/0x28
>
> static int alloc_mem_cgroup_per_zone_info(struct mem_cgroup *mem, int node)
> {
> ...
>         memset(pn, 0, sizeof(*pn));
>
>         for (zone = 0; zone < MAX_NR_ZONES; zone++) {
>                 mz = &pn->zoneinfo[zone];
>                 for_each_lru(l)
>                         INIT_LIST_HEAD(&mz->lists[l]);    <- crash here
>                 mz->usage_in_excess = 0;
>                 mz->on_tree = false;
>                 mz->mem = mem;
>         }
> ...
>
> crash> dis 0xc080b93e 15
> 0xc080b93e:  movl   $0x0,-0x18(%ebp)
> 0xc080b945:  mov    %esi,-0x1c(%ebp)
> 0xc080b948:  imul   $0x7c,-0x18(%ebp),%edi
> 0xc080b94c:  xor    %ecx,%ecx
> 0xc080b94e:  xor    %edx,%edx
> 0xc080b950:  lea    (%edi,%edx,8),%esi
> 0xc080b953:  add    $0x1,%ecx
> 0xc080b956:  lea    (%ebx,%esi,1),%eax
> 0xc080b959:  add    $0x1,%edx
> 0xc080b95c:  cmp    $0x5,%ecx
> 0xc080b95f:  mov    %eax,(%ebx,%esi,1)
> 0xc080b962:  mov    %eax,0x4(%eax)
> 0xc080b965:  jne    0xc080b950
> 0xc080b967:  mov    -0x14(%ebp),%eax
> 0xc080b96a:  movl   $0x0,0x6c(%eax)
>
> EDI on the first iteration should be 0, however it is a88c8840 according to the
> Oops dump, and looking at -0x18(%ebp) in the core shows 0 as it should be:
>
> crash> x/xw 0xf201deb4-0x18
> 0xf201de9c:  0x00000000
>
> So it looks like either EDI was incorrectly restored by Xen, or -0x18(%ebp)
> held that weird value at the moment 0xc080b948 was executed.
>
> It is possible that the invalid EDI value and the following
>
> 0xc080b950:  lea    (%edi,%edx,8),%esi
>
> lead to some accessible page, and the writes
>
> 0xc080b95f:  mov    %eax,(%ebx,%esi,1)
> 0xc080b962:  mov    %eax,0x4(%eax)
>
> silently go to that page. Then, after the list-init loop, it uses the correct pn
> offset from -0x14(%ebp) and initialises the remaining fields of the structure on
> the correct page:
>
>                 mz->usage_in_excess = 0;
>                 mz->on_tree = false;
>                 mz->mem = mem;
>
> 0xc080b967:  mov    -0x14(%ebp),%eax    <-
> 0xc080b96a:  movl   $0x0,0x6c(%eax)
> 0xc080b971:  movl   $0x0,0x70(%eax)
> 0xc080b978:  movb   $0x0,0x74(%eax)
> 0xc080b97c:  mov    -0x1c(%ebp),%edx
> 0xc080b97f:  mov    %edx,0x78(%eax)
> 0xc080b982:  add    $0x7c,%eax
> 0xc080b985:  addl   $0x1,-0x18(%ebp)
> 0xc080b989:  cmpl   $0x4,-0x18(%ebp)
> 0xc080b98d:  mov    %eax,-0x14(%ebp)
> 0xc080b990:  jne    0xc080b948
>
> which could lead to the zeroed list entries of the first zone
> and the originally reported Oops in mem_cgroup_force_empty.
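
To tie this back to the originally reported Oops: if the list-head stores for
the first zone go astray, its list heads keep the all-zero contents left by the
memset, so list->prev is NULL when the pc = list_entry(...) loop quoted near
the top later walks it, and list_entry() turns that NULL into the small
negative pointer from the crash. A minimal user-space sketch of the
container_of arithmetic (the struct layout below is my assumption of the
2.6.32-era 32-bit page_cgroup, with flags, mem_cgroup and page ahead of lru,
not something taken from this vmcore) reproduces the 0xfffffff4 value:

#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

struct page_cgroup {                    /* assumed 32-bit layout */
        unsigned long flags;            /* +0x0 */
        void *mem_cgroup;               /* +0x4 */
        void *page;                     /* +0x8  <- the faulting 0x8(%ebx) load */
        struct list_head lru;           /* +0xc */
};

#define list_entry(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

int main(void)
{
        struct list_head list = { NULL, NULL };  /* zeroed, never-initialised head */
        struct page_cgroup *pc = list_entry(list.prev, struct page_cgroup, lru);

        /* On 32-bit this prints 0xfffffff4, i.e. 0 - offsetof(..., lru) = -12 */
        printf("pc = %p\n", (void *)pc);
        return 0;
}

The trapping mov 0x8(%ebx),%esi is then just the pc->page load through that
bogus pointer.
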
> Afterwards,
>
> 0xc080b985:  addl   $0x1,-0x18(%ebp)
>
> reads -0x18(%ebp) correctly, and the remaining 3 mz entries are initialized as
> expected.
>
> So the question is why and how
>
> 0xc080b948:  imul   $0x7c,-0x18(%ebp),%edi
>
> may be getting screwed up.
>
> PS:
> However, a memory search for the stray writes of the first entry
> (i.e. the sequence f3446a00 f3446a00) in a couple of vmcores didn't give
> any positive results.
>
> > Thanks,
> > -Kame
> >
> >> PS:
> >> It is most easily reproduced only on a xen hvm 32bit guest under heavy
> >> vcpu contention for real cpu resources (i.e. I had to overcommit
> >> cpus and run several cpu hog tasks on the host to make the guest crash
> >> on a reboot cycle).
> >> And from the last experiments, the crash happens only on hosts that
> >> don't have the hap feature, or if hap is disabled in the hypervisor.
> >>
> >>> Thanks,
> >>> -Kame
> >>
>
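
For readers who prefer C to the disassembly above: the generated code
effectively keeps two copies of the zone pointer, one recomputed every
iteration from the loop counter (the imul into EDI, used only for the
list-head stores) and one saved in the stack slot at -0x14(%ebp) (used for the
trailing fields). Below is a rough user-space model of that split, with the
corruption injected by hand purely to show the resulting memory pattern; the
array names, sizes and the injected bad offset are illustrative assumptions on
my part, not values recovered from the vmcore:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define ZONE_SZ   0x7c          /* matches the imul $0x7c in the disassembly */
#define NR_ZONES  4
#define NR_LISTS  5

static uint32_t real_obj[NR_ZONES * ZONE_SZ / 4];  /* stands in for pn at f3446a00 */
static uint32_t stray_page[0x1000 / 4];            /* wherever the bad EDI pointed */

int main(void)
{
        memset(real_obj, 0, sizeof(real_obj));

        for (unsigned zone = 0; zone < NR_ZONES; zone++) {
                /* edi = zone * 0x7c; pretend it came back as garbage for zone 0 */
                int edi_corrupted = (zone == 0);

                /* The list-head self-pointers go through ebx+esi, i.e. the
                 * (possibly bogus) recomputed offset, so for zone 0 they land
                 * on some other accessible page instead of the real object. */
                for (unsigned l = 0; l < NR_LISTS; l++) {
                        uint32_t *head = edi_corrupted
                                       ? &stray_page[l * 2]
                                       : &real_obj[(zone * ZONE_SZ) / 4 + l * 2];
                        head[0] = head[1] = 0xdeadbeef;  /* stand-in for "head points to itself" */
                }

                /* The trailing fields reload the saved, correct pointer from
                 * -0x14(%ebp), so ->mem at offset 0x78 still lands on the real
                 * object (usage_in_excess and on_tree are written as zero). */
                real_obj[(zone * ZONE_SZ + 0x78) / 4] = 0xf36ef800;
        }

        /* Zone 0 now reads as all zeroes except the mem pointer at +0x78,
         * the same pattern as the "rd f3446a00" dump above. */
        printf("zone0[+0x00] = %08x, zone0[+0x78] = %08x\n",
               real_obj[0], real_obj[0x78 / 4]);
        return 0;
}

Under this model the first zone ends up looking exactly like the corrupted
entry in the dump, which at least is consistent with the stray-EDI theory.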