Subject: Re: [PATCH v3 RESEND] mm: shmem: implement POSIX_FADV_[WILL|DONT]NEED for shmem
To: Mark Hemment
Cc: Andrew Morton,
    "Matthew Wilcox (Oracle)", Charan Teja Reddy
From: Charan Teja Kalla
Date: Mon, 10 Jan 2022 15:51:39 +0530
Message-ID: <2c66ba2e-1c65-3bdd-b91e-eb8391ec6dbf@quicinc.com>
References: <1641488717-13865-1-git-send-email-quic_charante@quicinc.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Thanks Mark for the review!!

On 1/7/2022 5:40 PM, Mark Hemment wrote:
> On Thu, 6 Jan 2022 at 17:06, Charan Teja Reddy wrote:
>>
>> From: Charan Teja Reddy
>>
>> Currently fadvise(2) is supported only for files that are not
>> associated with noop_backing_dev_info; for other files, such as
>> shmem files, fadvise results in a NOP. However,
>> file_operations->fadvise() lets file systems implement their own
>> fadvise handling. Use this support to implement some of the
>> POSIX_FADV_XXX functionality for shmem files.
>>
>> [snip]
>
>> +static int shmem_fadvise_willneed(struct address_space *mapping,
>> +				  pgoff_t start, pgoff_t end)
>> +{
>> +	XA_STATE(xas, &mapping->i_pages, start);
>> +	struct page *page;
>> +
>> +	rcu_read_lock();
>> +	xas_for_each(&xas, page, end) {
>> +		if (!xa_is_value(page))
>> +			continue;
>> +		xas_pause(&xas);
>> +		rcu_read_unlock();
>> +
>> +		page = shmem_read_mapping_page(mapping, xas.xa_index);
>> +		if (!IS_ERR(page))
>> +			put_page(page);
>> +
>> +		rcu_read_lock();
>> +		if (need_resched()) {
>> +			xas_pause(&xas);
>> +			cond_resched_rcu();
>> +		}
>> +	}
>> +	rcu_read_unlock();
>> +
>> +	return 0;
>
> I have a doubt on referencing xa_index after calling xas_pause().
> xas_pause() walks xa_index forward, so will not be the value expected
> for the current page.

Agree here. I should have had a better test case to verify my changes.

> Also, not necessary to re-call xas_pause() before cond_resched (it is
> a no-op).

When CONFIG_DEBUG_ATOMIC_SLEEP is enabled, users may still need to call
xas_pause(), as cond_resched_rcu() drops the RCU lock there. No?

static inline void cond_resched_rcu(void)
{
#if defined(CONFIG_DEBUG_ATOMIC_SLEEP) || !defined(CONFIG_PREEMPT_RCU)
	rcu_read_unlock();
	cond_resched();
	rcu_read_lock();
#endif
}

> Would be better to check need_resched() before
> rcu_read_lock().

Okay, I can directly use cond_resched() when it is called before
rcu_read_lock().

> As this loop may call xas_pause() for most iterations, should consider
> using xa_for_each() instead (I *think* - still getting up to speed
> with XArray).

Even the XArray documentation says that if most entries found during a
walk require you to call xas_pause(), the xa_for_each() iterator may be
more appropriate. Since every value entry found in the xarray requires
me to call xas_pause(), I do agree that xa_for_each() is the
appropriate call here. Will switch to this in the next spin.

Waiting for further review comments on this patch.

> Mark
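For reference, a rough sketch of how the loop could look with the
xa_for_each() family. This is only illustrative and not compile-tested;
in particular the choice of xa_for_each_range() (to honour the end
index) is an assumption, not the final code for the next spin:

```c
static int shmem_fadvise_willneed(struct address_space *mapping,
				  pgoff_t start, pgoff_t end)
{
	struct page *page;
	unsigned long index;

	/*
	 * xa_for_each_range() takes and drops the RCU read lock on each
	 * iteration and restarts the walk from @index, so the explicit
	 * xas_pause()/rcu_read_unlock() dance is no longer needed, and
	 * rescheduling is safe at any point in the loop body.
	 */
	xa_for_each_range(&mapping->i_pages, index, page, start, end) {
		if (!xa_is_value(page))
			continue;

		/* Swap entry: read the page back into the page cache. */
		page = shmem_read_mapping_page(mapping, index);
		if (!IS_ERR(page))
			put_page(page);

		cond_resched();
	}

	return 0;
}
```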