From: "Rafael J.
 Wysocki"
Date: Wed, 27 May 2020 17:58:12 +0200
Subject: Re: [PATCH v2] PM: hibernate: restrict writes to the resume device
To: "Darrick J. Wong", Domenico Andreoli
Cc: "Rafael J. Wysocki", Pavel Machek, Christoph Hellwig, Al Viro,
 "Ted Ts'o", Len Brown, Linux PM, Linux Memory Management List,
 linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 Linux Kernel Mailing List

On Tue, May 26, 2020 at 6:19 PM Darrick J. Wong wrote:
>
> On Mon, May 25, 2020 at 12:52:17PM +0200, Rafael J. Wysocki wrote:
> > On Tue, May 19, 2020 at 8:14 PM Domenico Andreoli wrote:
> > >
> > > From: Domenico Andreoli
> > >
> > > Hibernation via the snapshot device requires write permission to the
> > > swap block device, the one that most often (but not necessarily) is
> > > used to store the hibernation image.
> > >
> > > With this patch, such permission is granted iff:
> > >
> > >  1) the snapshot device config option is enabled
> > >  2) the swap partition is used as the resume device
> > >
> > > In other circumstances the swap device is not writable from userspace.
> > >
> > > In order to achieve this, every write attempt to a swap device is
> > > checked against the device configured as part of the uswsusp API [0],
> > > using a pointer to the inode struct in memory. If the swap device
> > > being written to was not configured for resuming, the write request
> > > is denied.
> > >
> > > NOTE: this implementation works only for swap block devices, where
> > > the inode configured by swapon (which sets S_SWAPFILE) is the same
> > > one used by SNAPSHOT_SET_SWAP_AREA.
> > >
> > > In the case of a swap file, SNAPSHOT_SET_SWAP_AREA instead receives
> > > the inode of the block device containing the filesystem where the
> > > swap file is located (plus an offset into it), which is never passed
> > > to swapon and therefore does not have S_SWAPFILE set.
> > >
> > > As a result, the swap file itself (as a file) can never be written
> > > from userspace. It remains writable only when accessed directly
> > > through the containing block device, which is always writable by
> > > root.
> > >
> > > [0] Documentation/power/userland-swsusp.rst
> > >
> > > v2:
> > >  - rename is_hibernate_snapshot_dev() to is_hibernate_resume_dev()
> > >  - fix the description so that it correctly refers to the resume
> > >    device
> > >
> > > Signed-off-by: Domenico Andreoli
> > > Cc: "Rafael J. Wysocki"
> > > Cc: Pavel Machek
> > > Cc: Darrick J. Wong
> > > Cc: Christoph Hellwig
> > > Cc: viro@zeniv.linux.org.uk
> > > Cc: tytso@mit.edu
> > > Cc: len.brown@intel.com
> > > Cc: linux-pm@vger.kernel.org
> > > Cc: linux-mm@kvack.org
> > > Cc: linux-xfs@vger.kernel.org
> > > Cc: linux-fsdevel@vger.kernel.org
> > > Cc: linux-kernel@vger.kernel.org
> > >
> > > ---
> > >  fs/block_dev.c          |  3 +--
> > >  include/linux/suspend.h |  6 ++++++
> > >  kernel/power/user.c     | 14 +++++++++++++-
> > >  3 files changed, 20 insertions(+), 3 deletions(-)
> > >
> > > Index: b/include/linux/suspend.h
> > > ===================================================================
> > > --- a/include/linux/suspend.h
> > > +++ b/include/linux/suspend.h
> > > @@ -466,6 +466,12 @@ static inline bool system_entering_hiber
> > >  static inline bool hibernation_available(void) { return false; }
> > >  #endif /* CONFIG_HIBERNATION */
> > >
> > > +#ifdef CONFIG_HIBERNATION_SNAPSHOT_DEV
> > > +int is_hibernate_resume_dev(const struct inode *);
> > > +#else
> > > +static inline int is_hibernate_resume_dev(const struct inode *i) { return 0; }
> > > +#endif
> > > +
> > >  /* Hibernation and suspend events */
> > >  #define PM_HIBERNATION_PREPARE 0x0001 /* Going to hibernate */
> > >  #define PM_POST_HIBERNATION    0x0002 /* Hibernation finished */
> > > Index: b/kernel/power/user.c
> > > ===================================================================
> > > --- a/kernel/power/user.c
> > > +++ b/kernel/power/user.c
> > > @@ -35,8 +35,14 @@ static struct snapshot_data {
> > >  	bool ready;
> > >  	bool platform_support;
> > >  	bool free_bitmaps;
> > > +	struct inode *bd_inode;
> > >  } snapshot_state;
> > >
> > > +int is_hibernate_resume_dev(const struct inode *bd_inode)
> > > +{
> > > +	return hibernation_available() && snapshot_state.bd_inode == bd_inode;
> > > +}
> > > +
> > >  static int snapshot_open(struct inode *inode, struct file *filp)
> > >  {
> > >  	struct snapshot_data *data;
> > > @@ -95,6 +101,7 @@ static int snapshot_open(struct inode *i
> > >  	data->frozen = false;
> > >  	data->ready = false;
> > >  	data->platform_support = false;
> > > +	data->bd_inode = NULL;
> > >
> > >  Unlock:
> > >  	unlock_system_sleep();
> > > @@ -110,6 +117,7 @@ static int snapshot_release(struct inode
> > >
> > >  	swsusp_free();
> > >  	data = filp->private_data;
> > > +	data->bd_inode = NULL;
> > >  	free_all_swap_pages(data->swap);
> > >  	if (data->frozen) {
> > >  		pm_restore_gfp_mask();
> > > @@ -202,6 +210,7 @@ struct compat_resume_swap_area {
> > >  static int snapshot_set_swap_area(struct snapshot_data *data,
> > >  		void __user *argp)
> > >  {
> > > +	struct block_device *bdev;
> > >  	sector_t offset;
> > >  	dev_t swdev;
> > >
> > > @@ -232,9 +241,12 @@ static int snapshot_set_swap_area(struct
> > >  		data->swap = -1;
> > >  		return -EINVAL;
> > >  	}
> > > -	data->swap = swap_type_of(swdev, offset, NULL);
> > > +	data->swap = swap_type_of(swdev, offset, &bdev);
> > >  	if (data->swap < 0)
> > >  		return -ENODEV;
> > > +
> > > +	data->bd_inode = bdev->bd_inode;
> > > +	bdput(bdev);
> > >  	return 0;
> > >  }
> > >
> > > Index: b/fs/block_dev.c
> > > ===================================================================
> > > --- a/fs/block_dev.c
> > > +++ b/fs/block_dev.c
> > > @@ -2023,8 +2023,7 @@ ssize_t blkdev_write_iter(struct kiocb *
> > >  	if (bdev_read_only(I_BDEV(bd_inode)))
> > >  		return -EPERM;
> > >
> > > -	/* uswsusp needs write permission to the swap */
> > > -	if (IS_SWAPFILE(bd_inode) && !hibernation_available())
> > > +	if (IS_SWAPFILE(bd_inode) && !is_hibernate_resume_dev(bd_inode))
> > >  		return -ETXTBSY;
> > >
> > >  	if (!iov_iter_count(from))
> > >
> > > --
> >
> > The patch looks OK to me.
> >
> > Darrick, what do you think?
>
> Looks fine to me too.
>
> I kinda wonder how uswsusp prevents the bdev from being swapoff'd (or
> just plain disappearing) such that bd_inode will never point to a
> recycled inode, but I guess since we're only comparing pointer values
> it's not a big deal for this patch...
>
> Acked-by: Darrick J. Wong

Thanks!

So the patch has been applied as 5.8 material.

Cheers!