From: Luis Henriques
To: "Yan, Zheng"
Cc: "Yan, Zheng", Sage Weil, Ilya Dryomov, ceph-devel,
    Linux Kernel Mailing List, Hendrik Peyerl
Subject: Re: [PATCH v2 2/2] ceph: quota: fix quota subdir mounts
References: <20190312142019.30936-1-lhenriques@suse.com>
    <20190312142019.30936-3-lhenriques@suse.com>
Date: Tue, 19 Mar 2019 16:42:24 +0000
In-Reply-To: (Zheng Yan's message of "Mon, 18 Mar 2019 21:06:28 +0800")
Message-ID: <8736nin7db.fsf@suse.com>

"Yan, Zheng" writes:

> On Tue, Mar 12, 2019 at 10:22 PM Luis Henriques wrote:
...
>> +static struct inode *lookup_quotarealm_inode(struct ceph_mds_client *mdsc,
>> +					       struct super_block *sb,
>> +					       struct ceph_snap_realm *realm)
>> +{
>> +	struct inode *in;
>> +
>> +	in = ceph_lookup_inode(sb, realm->ino);
>> +	if (IS_ERR(in)) {
>> +		pr_warn("Can't lookup inode %llx (err: %ld)\n",
>> +			realm->ino, PTR_ERR(in));
>> +		return in;
>> +	}
>> +
>> +	spin_lock(&mdsc->quotarealms_inodes_lock);
>> +	list_add(&ceph_inode(in)->i_quotarealms_inode_item,
>> +		 &mdsc->quotarealms_inodes_list);
>> +	spin_unlock(&mdsc->quotarealms_inodes_lock);
>> +
>
> Multiple threads can call this function for the same inode at the same
> time; this needs to be handled.  Besides, the client needs to record the
> lookupino error, otherwise it may repeatedly send useless requests.

Good point.  The only way I see to fix this is to drop the
mdsc->quotarealms_inodes_list and instead use an ordered list/tree of
structs that would either point to the corresponding ceph inode or to
NULL if there was an error in the lookup:

  struct ceph_realm_inode {
	u64 ino;
	struct ceph_inode_info *ci;
	spinlock_t lock;
	unsigned long timeout;
  };

The 'timeout' field would be used to try the lookup again if the error
occurred a long time ago.

The code would then create a new struct for the realm->ino (if one is
not found in the mdsc list), lock it and do the lookupino; if there's a
struct already on the list, it means either that a lookupino is in
progress or that there was an error in the last lookup.

This sounds overly complicated, so I may be missing the obvious simple
fix.  Any ideas?

>> +	spin_lock(&realm->inodes_with_caps_lock);
>> +	realm->inode = in;
>
> The reply to lookup_ino should already set realm->inode.

Yes, of course.  This was silly.

Cheers,
--
Luis
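P.S.  In case it helps the discussion, below is a very rough (and
completely untested) sketch of what I have in mind.  Everything in it is
illustrative only: find_realm_inode() and CEPH_LOOKUPINO_RETRY_TIMEOUT
are made-up names, I'm reusing the existing quotarealms_inodes_list/lock
for the new entries, and I've swapped the per-entry spinlock for a mutex
since ceph_lookup_inode() can sleep.  The list entry keeps the inode
reference until umount cleanup (not shown), as in the original patch.

/* All names below are placeholders, not actual fs/ceph code. */

struct ceph_realm_inode {
	struct list_head node;		/* on mdsc->quotarealms_inodes_list */
	u64 ino;
	struct ceph_inode_info *ci;	/* NULL until lookup succeeds */
	unsigned long timeout;		/* jiffies; 0 means "no recent error" */
	struct mutex mutex;		/* serializes the actual lookup */
};

/* Find (or create) the list entry for this realm inode number. */
static struct ceph_realm_inode *
find_realm_inode(struct ceph_mds_client *mdsc, u64 ino)
{
	struct ceph_realm_inode *ri, *new;

	new = kzalloc(sizeof(*new), GFP_NOFS);
	if (!new)
		return NULL;
	new->ino = ino;
	mutex_init(&new->mutex);

	spin_lock(&mdsc->quotarealms_inodes_lock);
	list_for_each_entry(ri, &mdsc->quotarealms_inodes_list, node) {
		if (ri->ino == ino) {
			/* Someone else got here first; use their entry. */
			spin_unlock(&mdsc->quotarealms_inodes_lock);
			kfree(new);
			return ri;
		}
	}
	list_add(&new->node, &mdsc->quotarealms_inodes_list);
	spin_unlock(&mdsc->quotarealms_inodes_lock);

	return new;
}

static struct inode *lookup_quotarealm_inode(struct ceph_mds_client *mdsc,
					     struct super_block *sb,
					     struct ceph_snap_realm *realm)
{
	struct ceph_realm_inode *ri;
	struct inode *in;

	ri = find_realm_inode(mdsc, realm->ino);
	if (!ri)
		return ERR_PTR(-ENOMEM);

	mutex_lock(&ri->mutex);
	if (ri->ci) {
		/* Lookup already done; the list entry holds the reference. */
		in = &ri->ci->vfs_inode;
		mutex_unlock(&ri->mutex);
		return in;
	}
	if (ri->timeout && time_before(jiffies, ri->timeout)) {
		/* Recent failure; don't flood the MDS with lookups. */
		mutex_unlock(&ri->mutex);
		return ERR_PTR(-EAGAIN);
	}

	in = ceph_lookup_inode(sb, realm->ino);
	if (IS_ERR(in)) {
		pr_warn("Can't lookup inode %llx (err: %ld)\n",
			realm->ino, PTR_ERR(in));
		ri->timeout = jiffies + CEPH_LOOKUPINO_RETRY_TIMEOUT;
	} else {
		ri->ci = ceph_inode(in);
		ri->timeout = 0;
	}
	mutex_unlock(&ri->mutex);

	return in;
}

An rbtree keyed on 'ino' would obviously scale better than the linear
list walk, but a plain list keeps the sketch short.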