Subject: Re: WARNING in __mmdrop
To: "Michael S. Tsirkin"
Tsirkin" Cc: syzbot , aarcange@redhat.com, akpm@linux-foundation.org, christian@brauner.io, davem@davemloft.net, ebiederm@xmission.com, elena.reshetova@intel.com, guro@fb.com, hch@infradead.org, james.bottomley@hansenpartnership.com, jglisse@redhat.com, keescook@chromium.org, ldv@altlinux.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-parisc@vger.kernel.org, luto@amacapital.net, mhocko@suse.com, mingo@kernel.org, namit@vmware.com, peterz@infradead.org, syzkaller-bugs@googlegroups.com, viro@zeniv.linux.org.uk, wad@chromium.org References: <20190725012149-mutt-send-email-mst@kernel.org> <55e8930c-2695-365f-a07b-3ad169654d28@redhat.com> <20190725042651-mutt-send-email-mst@kernel.org> <84bb2e31-0606-adff-cf2a-e1878225a847@redhat.com> <20190725092332-mutt-send-email-mst@kernel.org> <11802a8a-ce41-f427-63d5-b6a4cf96bb3f@redhat.com> <20190726074644-mutt-send-email-mst@kernel.org> <5cc94f15-b229-a290-55f3-8295266edb2b@redhat.com> <20190726082837-mutt-send-email-mst@kernel.org> <20190726094353-mutt-send-email-mst@kernel.org> From: Jason Wang Message-ID: <63754251-a39a-1e0e-952d-658102682094@redhat.com> Date: Fri, 26 Jul 2019 22:00:20 +0800 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.8.0 MIME-Version: 1.0 In-Reply-To: <20190726094353-mutt-send-email-mst@kernel.org> Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 8bit Content-Language: en-US X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.32]); Fri, 26 Jul 2019 14:00:49 +0000 (UTC) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 2019/7/26 下午9:47, Michael S. Tsirkin wrote: > On Fri, Jul 26, 2019 at 08:53:18PM +0800, Jason Wang wrote: >> On 2019/7/26 下午8:38, Michael S. Tsirkin wrote: >>> On Fri, Jul 26, 2019 at 08:00:58PM +0800, Jason Wang wrote: >>>> On 2019/7/26 下午7:49, Michael S. Tsirkin wrote: >>>>> On Thu, Jul 25, 2019 at 10:25:25PM +0800, Jason Wang wrote: >>>>>> On 2019/7/25 下午9:26, Michael S. Tsirkin wrote: >>>>>>>> Exactly, and that's the reason actually I use synchronize_rcu() there. >>>>>>>> >>>>>>>> So the concern is still the possible synchronize_expedited()? >>>>>>> I think synchronize_srcu_expedited. >>>>>>> >>>>>>> synchronize_expedited sends lots of IPI and is bad for realtime VMs. >>>>>>> >>>>>>>> Can I do this >>>>>>>> on through another series on top of the incoming V2? >>>>>>>> >>>>>>>> Thanks >>>>>>>> >>>>>>> The question is this: is this still a gain if we switch to the >>>>>>> more expensive srcu? If yes then we can keep the feature on, >>>>>> I think we only care about the cost on srcu_read_lock() which looks pretty >>>>>> tiny form my point of view. Which is basically a READ_ONCE() + WRITE_ONCE(). >>>>>> >>>>>> Of course I can benchmark to see the difference. >>>>>> >>>>>> >>>>>>> if not we'll put it off until next release and think >>>>>>> of better solutions. rcu->srcu is just a find and replace, >>>>>>> don't see why we need to defer that. can be a separate patch >>>>>>> for sure, but we need to know how well it works. >>>>>> I think I get here, let me try to do that in V2 and let's see the numbers. >>>>>> >>>>>> Thanks >>>> It looks to me for tree rcu, its srcu_read_lock() have a mb() which is too >>>> expensive for us. >>> I will try to ponder using vq lock in some way. >>> Maybe with trylock somehow ... 
>>
>> Ok, let me retry if necessary (but I do remember I ended up with deadlocks
>> last time I tried).
>>
>>
>>>
>>>> If we just worry about the IPI,
>>> With synchronize_rcu what I would worry about is that guest is stalled
>>
>> Can this synchronize_rcu() be triggered by the guest? If yes, there are several
>> other MMU notifiers that can block. Is vhost something special here?
> Sorry, let me explain: guests (and tasks in general)
> can trigger activity that will
> make synchronize_rcu take a long time.

Yes, I get this.

> Thus blocking
> an mmu notifier until synchronize_rcu finishes
> is a bad idea.

The question is that MMU notifiers are allowed to block in
invalidate_range_start(), which can be much slower to finish than
synchronize_rcu().

Looking at amdgpu_mn_invalidate_range_start_gfx(), which calls
amdgpu_mn_invalidate_node(), which did:

                r = reservation_object_wait_timeout_rcu(bo->tbo.resv,
                        true, false, MAX_SCHEDULE_TIMEOUT);

...

>>> because system is busy because of other guests.
>>> With expedited it's the IPIs...
>>>
>> The current synchronize_rcu() can force an expedited grace period:
>>
>> void synchronize_rcu(void)
>> {
>>         ...
>>         if (rcu_blocking_is_gp())
>>                 return;
>>         if (rcu_gp_is_expedited())
>>                 synchronize_rcu_expedited();
>>         else
>>                 wait_rcu_gp(call_rcu);
>> }
>> EXPORT_SYMBOL_GPL(synchronize_rcu);
>
> An admin can force rcu to finish faster, trading
> interrupts for responsiveness.

Yes, so when that is set, each synchronize_rcu() will go through
synchronize_rcu_expedited().

>
>>>> can we do something like in
>>>> vhost_invalidate_vq_start()?
>>>>
>>>>         if (map) {
>>>>                 /* In order to avoid possible IPIs with
>>>>                  * synchronize_rcu_expedited() we use call_rcu() +
>>>>                  * completion.
>>>>                  */
>>>>                 init_completion(&c.completion);
>>>>                 call_rcu(&c.rcu_head, vhost_finish_vq_invalidation);
>>>>                 wait_for_completion(&c.completion);
>>>>                 vhost_set_map_dirty(vq, map, index);
>>>>                 vhost_map_unprefetch(map);
>>>>         }
>>>>
>>>> ?
>>> Why would that be faster than synchronize_rcu?
>>
>> No faster, but no IPI.
>>
> Sorry I still don't see the point.
> synchronize_rcu doesn't normally do an IPI either.
>

Not the case when rcu_expedited is set. This just makes 100% sure there is
no IPI (a self-contained sketch of this call_rcu() + completion pattern is
at the end of this mail).

>>>
>>>>> There's one other thing that bothers me, and that is that
>>>>> for large rings which are not physically contiguous
>>>>> we don't implement the optimization.
>>>>>
>>>>> For sure, that can wait, but I think eventually we should
>>>>> vmap large rings.
>>>> Yes, worth trying. But using the direct map has its own advantage: it can
>>>> use hugepages, which vmap can't.
>>>>
>>>> Thanks
>>> Sure, so we can do that for small rings.
>>
>> Yes, that's possible but should be done on top.
>>
>> Thanks
> Absolutely. Need to fix up the bugs first.
>

Yes.

Thanks
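
For reference, a minimal self-contained sketch of the call_rcu() + completion
pattern quoted above, i.e. waiting for a grace period without ever issuing the
expedited-path IPIs. The names here (vhost_inval_waiter, vhost_vq_inval_done,
vhost_wait_rcu_no_ipi) are hypothetical illustrations, not the actual vhost
code:

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/completion.h>

/* Carries the RCU callback and the completion the caller sleeps on. */
struct vhost_inval_waiter {
	struct rcu_head rcu_head;
	struct completion completion;
};

/* Runs after a full grace period has elapsed; because it is queued via
 * call_rcu(), no expedited IPIs are ever sent. */
static void vhost_vq_inval_done(struct rcu_head *head)
{
	struct vhost_inval_waiter *w =
		container_of(head, struct vhost_inval_waiter, rcu_head);

	complete(&w->completion);
}

/* Same effect for the caller as synchronize_rcu(), but never takes the
 * synchronize_rcu_expedited() path even if rcu_expedited is set. */
static void vhost_wait_rcu_no_ipi(void)
{
	struct vhost_inval_waiter w;

	init_completion(&w.completion);
	call_rcu(&w.rcu_head, vhost_vq_inval_done);

	/* Sleep until the callback fires; all RCU readers that started
	 * before the call_rcu() are guaranteed to have finished by then. */
	wait_for_completion(&w.completion);
}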