Subject: Re: WARNING in __mmdrop
To: "Michael S. Tsirkin"
Cc: syzbot, aarcange@redhat.com, akpm@linux-foundation.org, christian@brauner.io,
    davem@davemloft.net, ebiederm@xmission.com, elena.reshetova@intel.com,
    guro@fb.com, hch@infradead.org, james.bottomley@hansenpartnership.com,
    jglisse@redhat.com, keescook@chromium.org, ldv@altlinux.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-parisc@vger.kernel.org, luto@amacapital.net,
    mhocko@suse.com, mingo@kernel.org, namit@vmware.com, peterz@infradead.org,
    syzkaller-bugs@googlegroups.com, viro@zeniv.linux.org.uk, wad@chromium.org
References: <0000000000008dd6bb058e006938@google.com>
 <000000000000964b0d058e1a0483@google.com>
 <20190721044615-mutt-send-email-mst@kernel.org>
 <20190721081447-mutt-send-email-mst@kernel.org>
 <85dd00e2-37a6-72b7-5d5a-8bf46a3526cf@redhat.com>
 <20190722040230-mutt-send-email-mst@kernel.org>
 <4bd2ff78-6871-55f2-44dc-0982ffef3337@redhat.com>
 <20190723010019-mutt-send-email-mst@kernel.org>
From: Jason Wang
Date: Tue, 23 Jul 2019 13:47:04 +0800
In-Reply-To: <20190723010019-mutt-send-email-mst@kernel.org>

On 2019/7/23 1:01 PM, Michael S. Tsirkin wrote:
> On Tue, Jul 23, 2019 at 12:01:40PM +0800, Jason Wang wrote:
>> On 2019/7/22 4:08 PM, Michael S. Tsirkin wrote:
>>> On Mon, Jul 22, 2019 at 01:24:24PM +0800, Jason Wang wrote:
>>>> On 2019/7/21 8:18 PM, Michael S. Tsirkin wrote:
>>>>> On Sun, Jul 21, 2019 at 06:02:52AM -0400, Michael S. Tsirkin wrote:
>>>>>> On Sat, Jul 20, 2019 at 03:08:00AM -0700, syzbot wrote:
>>>>>>> syzbot has bisected this bug to:
>>>>>>>
>>>>>>> commit 7f466032dc9e5a61217f22ea34b2df932786bbfc
>>>>>>> Author: Jason Wang
>>>>>>> Date:   Fri May 24 08:12:18 2019 +0000
>>>>>>>
>>>>>>>     vhost: access vq metadata through kernel virtual address
>>>>>>>
>>>>>>> bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=149a8a20600000
>>>>>>> start commit:   6d21a41b Add linux-next specific files for 20190718
>>>>>>> git tree:       linux-next
>>>>>>> final crash:    https://syzkaller.appspot.com/x/report.txt?x=169a8a20600000
>>>>>>> console output: https://syzkaller.appspot.com/x/log.txt?x=129a8a20600000
>>>>>>> kernel config:  https://syzkaller.appspot.com/x/.config?x=3430a151e1452331
>>>>>>> dashboard link: https://syzkaller.appspot.com/bug?extid=e58112d71f77113ddb7b
>>>>>>> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=10139e68600000
>>>>>>>
>>>>>>> Reported-by: syzbot+e58112d71f77113ddb7b@syzkaller.appspotmail.com
>>>>>>> Fixes: 7f466032dc9e ("vhost: access vq metadata through kernel virtual address")
>>>>>>>
>>>>>>> For information about bisection process see: https://goo.gl/tpsmEJ#bisection
>>>>>> OK I poked at this for a bit, and I see several things that
>>>>>> we need to fix, though I'm not yet sure they're the reason for
>>>>>> the failures:
>>>>>>
>>>>>>
>>>>>> 1. mmu_notifier_register shouldn't be called from vhost_vring_set_num_addr.
>>>>>> That's just a bad hack; in particular I don't think the device
>>>>>> mutex is taken, so poking at two VQs will corrupt
>>>>>> memory.
>>>>>> So what to do?
>>>>>> How about a per vq notifier?
>>>>>> Of course we also have synchronize_rcu
>>>>>> in the notifier, which is slow and is now going to be called twice.
>>>>>> I think call_rcu would be more appropriate here.
>>>>>> We then need rcu_barrier on module unload.
>>>>>> OTOH if we make pages linear with map then we are good
>>>>>> with kfree_rcu, which is even nicer.
>>>>>>
>>>>>> 2. Doesn't map leak after vhost_map_unprefetch?
>>>>>> And why does it poke at the contents of the map?
>>>>>> No one should use it, right?
>>>>>>
>>>>>> 3. notifier unregister happens last in vhost_dev_cleanup,
>>>>>> but register happens first. This looks wrong to me.
>>>>>>
>>>>>> 4. OK so we use the invalidate count to try and detect that
>>>>>> some invalidate is in progress.
>>>>>> I am not 100% sure why we care.
>>>>>> Assuming we do, uaddr can change between start and end,
>>>>>> and then the counter can go negative, or generally
>>>>>> get out of sync.
>>>>>>
>>>>>> So what to do about all this?
>>>>>> I am inclined to say let's just drop the uaddr optimization
>>>>>> for now. E.g. kvm invalidates unconditionally.
>>>>>> 3 should be fixed independently.
>>>>> The above implements this but is only build-tested.
>>>>> Jason, pls take a look. If you like the approach, feel
>>>>> free to take it from here.
>>>>>
>>>>> One thing the below does not have is any kind of rate-limiting.
>>>>> Given it's so easy to restart, I'm thinking it makes sense
>>>>> to add a generic infrastructure for this.
>>>>> Can be a separate patch I guess.
>>>> I don't get why we must use kfree_rcu() instead of synchronize_rcu() here.
>>> synchronize_rcu has very high latency on busy systems.
>>> It is not something that should be used on a syscall path.
>>> KVM had to switch to SRCU to keep it sane.
>>> Otherwise one guest can trivially slow down another one.
>>
>> I think you mean synchronize_rcu_expedited()? Rethinking the code, the
>> synchronize_rcu() in the ioctl() could be removed, since it is serialized
>> with the memory accessors.
>
> Really, let's just use kfree_rcu. It's way cleaner: fire and forget.

Looks like not: you still need to rate-limit the "fire", as you've figured out.
And in fact the synchronization is not even needed; would it help if I leave a
comment to explain? (See also the sketch at the end of this mail.)

>
>> Btw, for the kvm ioctl it still uses synchronize_rcu() in kvm_vcpu_ioctl()
>> (just a little bit harder to trigger):
>
> AFAIK these never run in response to guest events.
> So they can take very long and guests still won't crash.

What if the guest manages to escape to qemu?

Thanks

>
>
>>     case KVM_RUN: {
>> ...
>>         if (unlikely(oldpid != task_pid(current))) {
>>             /* The thread running this VCPU changed. */
>>             struct pid *newpid;
>>
>>             r = kvm_arch_vcpu_run_pid_change(vcpu);
>>             if (r)
>>                 break;
>>
>>             newpid = get_task_pid(current, PIDTYPE_PID);
>>             rcu_assign_pointer(vcpu->pid, newpid);
>>             if (oldpid)
>>                 synchronize_rcu();
>>             put_pid(oldpid);
>>         }
>> ...
>>         break;
>>
>>
>>>>> Signed-off-by: Michael S. Tsirkin
>>>> Let me try to figure out the root cause, then decide whether or not to go
>>>> this way.
>>>>
>>>> Thanks
>>> The root cause of the crash is relevant, but we still need
>>> to fix issues 1-4.
>>>
>>> More issues (my patch tries to fix them too):
>>>
>>> 5. page not dirtied when mappings are torn down outside
>>> of the invalidate callback
>>
>> Yes.
>>
>>
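For 5, just to spell out what I think the fix amounts to: the teardown side has
to dirty the pinned pages itself before dropping them. Roughly something like
this (only a sketch; the function name and parameters here are made up for
illustration, not the actual vhost helpers):

    #include <linux/mm.h>

    /*
     * Illustrative only: dirty and unpin the pages of a map that is being
     * torn down outside of the mmu notifier invalidate callback.
     */
    static void vhost_map_drop_pages_sketch(struct page **pages, int npages,
                                            bool written)
    {
            int i;

            for (i = 0; i < npages; i++) {
                    if (written)
                            set_page_dirty_lock(pages[i]);
                    put_page(pages[i]);
            }
    }
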
>>> 6. potential cross-VM DOS by one guest keeping the system busy
>>> and increasing synchronize_rcu latency to the point where
>>> another guest starts timing out and crashes
>>>
>>>
>>>
>> This will be addressed after I remove the synchronize_rcu() from the ioctl path.
>>
>> Thanks
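
And to make the kfree_rcu point concrete, the fire-and-forget pattern being
discussed would look roughly like this (again only a sketch; the struct and
field names are illustrative, not the actual vhost map code):

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    /* Illustrative stand-in for the per-vq metadata map. */
    struct vq_map_sketch {
            void *addr;
            struct rcu_head rcu;    /* lets kfree_rcu() defer the free past a grace period */
    };

    /* Unpublish the map and free it without blocking the syscall path. */
    static void vq_map_release_sketch(struct vq_map_sketch __rcu **slot)
    {
            /* In the real code the caller would hold the vq mutex here. */
            struct vq_map_sketch *map = rcu_dereference_protected(*slot, 1);

            rcu_assign_pointer(*slot, NULL);
            if (map)
                    kfree_rcu(map, rcu);    /* fire and forget: no synchronize_rcu() */
    }

Readers would still need to run under rcu_read_lock(); and if call_rcu() with a
callback living in module code were used instead, module unload would also need
an rcu_barrier(), as noted above.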