Date: Thu, 12 May 2022 22:07:09 +0100
From: Matthew Wilcox
To: cgel.zte@gmail.com
Cc: akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, corbet@lwn.net, xu xin, Yang Yang, Ran Xiaokai, wangyong, Yunkai Zhang
Subject: Re: [PATCH v6] mm/ksm: introduce ksm_force for each process
References: <20220510122242.1380536-1-xu.xin16@zte.com.cn>
In-Reply-To: <20220510122242.1380536-1-xu.xin16@zte.com.cn>

On Tue, May 10, 2022 at 12:22:42PM +0000, cgel.zte@gmail.com wrote:
> +++ b/Documentation/admin-guide/mm/ksm.rst
> @@ -32,7 +32,7 @@ are swapped back in: ksmd must rediscover their identity and merge again).
> Controlling KSM with madvise
> ============================
>
> -KSM only operates on those areas of address space which an application
> +KSM can operates on those areas of address space which an application

"can operate on"

> +static ssize_t ksm_force_write(struct file *file, const char __user *buf,
> +				size_t count, loff_t *ppos)
> +{
> +	struct task_struct *task;
> +	struct mm_struct *mm;
> +	char buffer[PROC_NUMBUF];
> +	int force;
> +	int err = 0;
> +
> +	memset(buffer, 0, sizeof(buffer));
> +	if (count > sizeof(buffer) - 1)
> +		count = sizeof(buffer) - 1;
> +	if (copy_from_user(buffer, buf, count))
> +		return -EFAULT;
> +
> +	err = kstrtoint(strstrip(buffer), 0, &force);
> +	if (err)
> +		return err;
> +
> +	if (force != 0 && force != 1)
> +		return -EINVAL;
> +
> +	task = get_proc_task(file_inode(file));
> +	if (!task)
> +		return -ESRCH;
> +
> +	mm = get_task_mm(task);
> +	if (!mm)
> +		goto out_put_task;
> +
> +	if (mm->ksm_force != force) {
> +		if (mmap_write_lock_killable(mm)) {
> +			err = -EINTR;
> +			goto out_mmput;
> +		}
> +
> +		if (force == 0)
> +			mm->ksm_force = force;
> +		else {
> +			/*
> +			 * Force anonymous pages of this mm to be involved in KSM merging
> +			 * without explicitly calling madvise.
> +			 */
> +			if (!test_bit(MMF_VM_MERGEABLE, &mm->flags))
> +				err = __ksm_enter(mm);
> +			if (!err)
> +				mm->ksm_force = force;
> +		}
> +
> +		mmap_write_unlock(mm);
> +	}

There's a much simpler patch hiding inside this complicated one.

	if (force) {
		set_bit(MMF_VM_MERGEABLE, &mm->flags);
		for each VMA set VM_MERGEABLE;
		err = __ksm_enter(mm);
	} else {
		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
		for each VMA clear VM_MERGEABLE;
	}

... and all the extra complications you added go away.
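For illustration only, here is roughly the shape that simpler approach could take as a helper, assuming the caller already holds a reference on the mm (ksm_force_set is a made-up name, the walk uses the pre-maple-tree mm->mmap list, and the disable path glosses over unmerging pages that are already shared):

	static int ksm_force_set(struct mm_struct *mm, int force)
	{
		struct vm_area_struct *vma;
		int err = 0;

		if (mmap_write_lock_killable(mm))
			return -EINTR;

		if (force) {
			/* Mark every existing VMA mergeable; ksmd scans them. */
			for (vma = mm->mmap; vma; vma = vma->vm_next)
				vma->vm_flags |= VM_MERGEABLE;
			if (!test_bit(MMF_VM_MERGEABLE, &mm->flags))
				err = __ksm_enter(mm);
		} else {
			/* Drop the per-VMA hint and the per-mm flag again. */
			for (vma = mm->mmap; vma; vma = vma->vm_next)
				vma->vm_flags &= ~VM_MERGEABLE;
			clear_bit(MMF_VM_MERGEABLE, &mm->flags);
		}

		mmap_write_unlock(mm);
		return err;
	}

That keeps the policy in one place; the /proc write handler then only has to parse the 0/1 and call this under get_task_mm(), the same way the posted patch already does, with no extra mm->ksm_force state to keep in sync.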