Date: Mon, 22 Jul 2019 12:13:40 -0400
From: "Michael S. Tsirkin"
To: "Paul E. McKenney"
McKenney" Cc: Joel Fernandes , Matthew Wilcox , aarcange@redhat.com, akpm@linux-foundation.org, christian@brauner.io, davem@davemloft.net, ebiederm@xmission.com, elena.reshetova@intel.com, guro@fb.com, hch@infradead.org, james.bottomley@hansenpartnership.com, jasowang@redhat.com, jglisse@redhat.com, keescook@chromium.org, ldv@altlinux.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-parisc@vger.kernel.org, luto@amacapital.net, mhocko@suse.com, mingo@kernel.org, namit@vmware.com, peterz@infradead.org, syzkaller-bugs@googlegroups.com, viro@zeniv.linux.org.uk, wad@chromium.org Subject: Re: RFC: call_rcu_outstanding (was Re: WARNING in __mmdrop) Message-ID: <20190722120011-mutt-send-email-mst@kernel.org> References: <0000000000008dd6bb058e006938@google.com> <000000000000964b0d058e1a0483@google.com> <20190721044615-mutt-send-email-mst@kernel.org> <20190721081933-mutt-send-email-mst@kernel.org> <20190721131725.GR14271@linux.ibm.com> <20190721210837.GC363@bombadil.infradead.org> <20190721233113.GV14271@linux.ibm.com> <20190722151439.GA247639@google.com> <20190722114612-mutt-send-email-mst@kernel.org> <20190722155534.GG14271@linux.ibm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20190722155534.GG14271@linux.ibm.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.47]); Mon, 22 Jul 2019 16:13:51 +0000 (UTC) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Mon, Jul 22, 2019 at 08:55:34AM -0700, Paul E. McKenney wrote: > On Mon, Jul 22, 2019 at 11:47:24AM -0400, Michael S. Tsirkin wrote: > > On Mon, Jul 22, 2019 at 11:14:39AM -0400, Joel Fernandes wrote: > > > [snip] > > > > > Would it make sense to have call_rcu() check to see if there are many > > > > > outstanding requests on this CPU and if so process them before returning? > > > > > That would ensure that frequent callers usually ended up doing their > > > > > own processing. > > > > > > Other than what Paul already mentioned about deadlocks, I am not sure if this > > > would even work for all cases since call_rcu() has to wait for a grace > > > period. > > > > > > So, if the number of outstanding requests are higher than a certain amount, > > > then you *still* have to wait for some RCU configurations for the grace > > > period duration and cannot just execute the callback in-line. Did I miss > > > something? > > > > > > Can waiting in-line for a grace period duration be tolerated in the vhost case? > > > > > > thanks, > > > > > > - Joel > > > > No, but it has many other ways to recover (try again later, drop a > > packet, use a slower copy to/from user). > > True enough! And your idea of taking recovery action based on the number > of callbacks seems like a good one while we are getting RCU's callback > scheduling improved. > > By the way, was this a real problem that you could make happen on real > hardware? > If not, I would suggest just letting RCU get improved over > the next couple of releases. So basically use kfree_rcu but add a comment saying e.g. "WARNING: in the future callers of kfree_rcu might need to check that not too many callbacks get queued. In that case, we can disable the optimization, or recover in some other way. Watch this space." 
> If it is something that you actually made happen, please let me know
> what (if anything) you need from me for your callback-counting EBUSY
> scheme.
>
> 							Thanx, Paul

If you mean kfree_rcu() causing an OOM, then no, it's all theoretical.
If you mean synchronize_rcu() stalling to the point where the guest will
oops, then yes, that's not too hard to trigger.
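For completeness, the counting side of such an EBUSY scheme might look
roughly like the sketch below (again untested, and all the names are
made up). One caveat: callbacks are normally invoked on the CPU that
queued them, but rcu_nocbs offloading and CPU hotplug can migrate them,
so a per-CPU count like this is only approximate:

	#include <linux/percpu.h>
	#include <linux/rcupdate.h>

	static DEFINE_PER_CPU(unsigned long, rcu_outstanding);

	/* Hypothetical: callbacks queued on this CPU, not yet invoked. */
	static unsigned long call_rcu_outstanding(void)
	{
		return this_cpu_read(rcu_outstanding);
	}

	/* Use instead of call_rcu() so queued callbacks get counted. */
	static void counted_call_rcu(struct rcu_head *head, rcu_callback_t func)
	{
		this_cpu_inc(rcu_outstanding);
		call_rcu(head, func);
	}

	/* Each counted callback must call this when it runs. */
	static void counted_call_rcu_done(void)
	{
		this_cpu_dec(rcu_outstanding);
	}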