Date: Wed, 20 Oct 2021 15:49:38 +0800
From: Ming Lei
To: Miroslav Benes
Cc: Luis Chamberlain, Benjamin Herrenschmidt, Paul Mackerras, tj@kernel.org,
    gregkh@linuxfoundation.org, akpm@linux-foundation.org, minchan@kernel.org,
    jeyu@kernel.org, shuah@kernel.org, bvanassche@acm.org,
    dan.j.williams@intel.com, joe@perches.com, tglx@linutronix.de,
    keescook@chromium.org, rostedt@goodmis.org, linux-spdx@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-block@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-kernel@vger.kernel.org, live-patching@vger.kernel.org,
    ming.lei@redhat.com
Subject: Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate

On Wed, Oct 20, 2021 at 08:43:37AM +0200, Miroslav Benes wrote:
> On Tue, 19 Oct 2021, Ming Lei wrote:
> 
> > On Tue, Oct 19, 2021 at 08:23:51AM +0200, Miroslav Benes wrote:
> > > > > By you only addressing the deadlock as a requirement on approach a) you are
> > > > > forgetting that there *may* already be present drivers which *do* implement
> > > > > such patterns in the kernel.
> > > > > I worked on addressing the deadlock because I was informed livepatching
> > > > > *did* have that issue as well, and so very likely a generic solution to
> > > > > the deadlock could be beneficial to other random drivers.
> > > > 
> > > > In-tree zram doesn't have such a deadlock; if livepatching has such an AA
> > > > deadlock, just fix it there, and it seems it has already been fixed by
> > > > 3ec24776bfd0.
> > > 
> > > I would not call it a fix. It is a kind of ugly workaround, because the
> > > generic infrastructure lacked (lacks) the proper support in my opinion.
> > > Luis is trying to fix that.
> > 
> > What is the proper support in the generic infrastructure? I am not
> > familiar with livepatching's model (especially with module unload). Do you
> > mean livepatching has to do the following from sysfs:
> > 
> > 1) during module exit:
> > 
> >         mutex_lock(lp_lock);
> >         kobject_put(lp_kobj);
> >         mutex_unlock(lp_lock);
> > 
> > 2) in the show()/store() methods of lp_kobj's attributes:
> > 
> >         mutex_lock(lp_lock)
> >         ...
> >         mutex_unlock(lp_lock)
> 
> Yes, this was exactly the case. We then reworked it a lot (see
> 958ef1e39d24 ("livepatch: Simplify API by removing registration step")), so
> now the call sequence is different. kobject_put() is basically offloaded
> to a workqueue scheduled right from the store() method. Meaning that
> Luis's work would probably not help us currently, but on the other hand
> the issues with AA deadlock were one of the main drivers of the redesign
> (if I remember correctly). There were other reasons too, as the changelog
> of the commit describes.
> 
> So, from my perspective, if there was a way to easily synchronize between
> a data cleanup from the module_exit callback and sysfs/kernfs operations,
> it could spare people many headaches.

kobject_del() is supposed to do that, but you can't call it while holding a
shared lock that the show()/store() methods also take. Once kobject_del()
returns, no show()/store() is pending any more. The question is why a shared
lock is required for livepatching to delete the kobject in the first place:
what exactly are you protecting when you delete one kobject?

> > IMO, the above usage simply caused the AA deadlock. Even in Luis's patch
> > 'zram: fix crashes with cpu hotplug multistate', a new instance of the same
> > AA deadlock (hot_remove_store() vs. disksize_store() or reset_store()) is
> > added, because hot_remove_store() isn't called from module_exit().
> > 
> > Luis tries to delay unloading the module until all show()/store() are done.
> > But the same thing can be obtained simply during module_exit():
> > 
> >         kobject_del(lp_kobj);   //all pending store()/show() on lp_kobj are done,
> >                                 //no new store()/show() can come after
> >                                 //kobject_del() returns
> >         mutex_lock(lp_lock);
> >         kobject_put(lp_kobj);
> >         mutex_unlock(lp_lock);
> 
> kobject_del() already calls kobject_put(). Did you mean __kobject_del()?
> That one is internal though.

kobject_del() is the counterpart of kobject_add(). kobject_put() will call
kobject_del() automatically if the kobject hasn't been deleted yet, but
kobject_put() is usually meant only for dropping the reference. The more
common pattern is to release a kobject by calling kobject_del() and then
kobject_put().

> 
> > Or can you explain your requirement on kobject/module unload in a bit
> > more detail?
> 
> Does the above make sense?

I think the focus now is the shared lock taken by both kobject_del() and the
show()/store() methods of the kobject's attributes.

Thanks,
Ming
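
P.S. To make the ordering above concrete, here is a minimal, untested sketch
of the module_exit() sequence I am describing. The names (lp_demo, lp_kobj,
lp_lock, lp_state) are made up for illustration and are not the real
livepatch or zram symbols; the point is only that kobject_del() runs before
the shared lock is taken, so it can drain in-flight show()/store() callers
without deadlocking on that lock:

/*
 * Untested sketch only: lp_demo/lp_kobj/lp_lock/lp_state are hypothetical
 * names, not symbols from the livepatch or zram code under discussion.
 */
#include <linux/kobject.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/sysfs.h>

static DEFINE_MUTEX(lp_lock);           /* shared by sysfs ops and module exit */
static struct kobject *lp_kobj;
static int lp_state;

static ssize_t state_show(struct kobject *kobj, struct kobj_attribute *attr,
                          char *buf)
{
        int val;

        /* show()/store() take the same lock as the cleanup path */
        mutex_lock(&lp_lock);
        val = lp_state;
        mutex_unlock(&lp_lock);

        return sysfs_emit(buf, "%d\n", val);
}

static struct kobj_attribute state_attr = __ATTR_RO(state);

static int __init lp_demo_init(void)
{
        lp_kobj = kobject_create_and_add("lp_demo", kernel_kobj);
        if (!lp_kobj)
                return -ENOMEM;

        if (sysfs_create_file(lp_kobj, &state_attr.attr)) {
                kobject_put(lp_kobj);
                return -ENOMEM;
        }
        return 0;
}

static void __exit lp_demo_exit(void)
{
        /*
         * Delete the kobject while NOT holding lp_lock: kobject_del()
         * waits for in-flight show()/store() callers (which may be
         * blocked on lp_lock) and fences off new ones, so taking the
         * lock around this call is what creates the AA deadlock.
         */
        kobject_del(lp_kobj);

        /* No show()/store() can run now; the lock is safe to take. */
        mutex_lock(&lp_lock);
        lp_state = 0;                   /* final cleanup of shared state */
        mutex_unlock(&lp_lock);

        kobject_put(lp_kobj);
}

module_init(lp_demo_init);
module_exit(lp_demo_exit);
MODULE_LICENSE("GPL");

zram is messier because hot_remove_store() tears devices down outside
module_exit(), which is why the same AA deadlock shows up there, as noted
above.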