From: Max Kellermann <max.kellermann@ionos.com>
To: linux-kernel@vger.kernel.org
Cc: Max Kellermann <max.kellermann@ionos.com>
Subject: [PATCH v4 22/35] blkdev.h: move declarations to blkdev_types.h
Date: Mon, 12 Feb 2024 00:15:05 +0100
Message-Id: <20240211231518.349442-23-max.kellermann@ionos.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240211231518.349442-1-max.kellermann@ionos.com>
References: <20240211231518.349442-1-max.kellermann@ionos.com>

By providing declarations in a lean header, we can reduce header
dependencies.

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
---
 include/linux/blkdev.h         | 444 +------------------------------
 include/linux/blkdev_types.h   | 463 +++++++++++++++++++++++++++++++++
 include/linux/cdrom.h          |   2 +-
 include/linux/io_uring_types.h |   2 +-
 io_uring/io_uring.c            |   1 +
 5 files changed, 467 insertions(+), 445 deletions(-)
 create mode 100644 include/linux/blkdev_types.h

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 5cd0edada09d..6a07ced6099f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -5,11 +5,9 @@
 #ifndef _LINUX_BLKDEV_H
 #define _LINUX_BLKDEV_H
 
-#include
+#include
 #include	// for inode_unhashed()
 #include
-#include
-#include
 #include
 #include
 #include
@@ -45,173 +43,6 @@ extern const struct device_type disk_type;
 extern const struct device_type part_type;
 extern struct class block_class;
 
-/*
- * Maximum number of blkcg policies allowed to be registered concurrently.
- * Defined here to simplify include dependency.
- */
-#define BLKCG_MAX_POLS		6
-
-#define DISK_MAX_PARTS			256
-#define DISK_NAME_LEN			32
-
-#define PARTITION_META_INFO_VOLNAMELTH	64
-/*
- * Enough for the string representation of any kind of UUID plus NULL.
- * EFI UUID is 36 characters. MSDOS UUID is 11 characters.
- */
-#define PARTITION_META_INFO_UUIDLTH	(UUID_STRING_LEN + 1)
-
-struct partition_meta_info {
-	char uuid[PARTITION_META_INFO_UUIDLTH];
-	u8 volname[PARTITION_META_INFO_VOLNAMELTH];
-};
-
-/**
- * DOC: genhd capability flags
- *
- * ``GENHD_FL_REMOVABLE``: indicates that the block device gives access to
- * removable media. When set, the device remains present even when media is not
- * inserted. Shall not be set for devices which are removed entirely when the
- * media is removed.
- *
- * ``GENHD_FL_HIDDEN``: the block device is hidden; it doesn't produce events,
- * doesn't appear in sysfs, and can't be opened from userspace or using
- * blkdev_get*. Used for the underlying components of multipath devices.
- *
- * ``GENHD_FL_NO_PART``: partition support is disabled. The kernel will not
- * scan for partitions from add_disk, and users can't add partitions manually.
- *
- */
-enum {
-	GENHD_FL_REMOVABLE	= 1 << 0,
-	GENHD_FL_HIDDEN		= 1 << 1,
-	GENHD_FL_NO_PART	= 1 << 2,
-};
-
-enum {
-	DISK_EVENT_MEDIA_CHANGE		= 1 << 0, /* media changed */
-	DISK_EVENT_EJECT_REQUEST	= 1 << 1, /* eject requested */
-};
-
-enum {
-	/* Poll even if events_poll_msecs is unset */
-	DISK_EVENT_FLAG_POLL			= 1 << 0,
-	/* Forward events to udev */
-	DISK_EVENT_FLAG_UEVENT			= 1 << 1,
-	/* Block event polling when open for exclusive write */
-	DISK_EVENT_FLAG_BLOCK_ON_EXCL_WRITE	= 1 << 2,
-};
-
-struct disk_events;
-struct badblocks;
-
-struct blk_integrity {
-	const struct blk_integrity_profile	*profile;
-	unsigned char				flags;
-	unsigned char				tuple_size;
-	unsigned char				interval_exp;
-	unsigned char				tag_size;
-};
-
-typedef unsigned int __bitwise blk_mode_t;
-
-/* open for reading */
-#define BLK_OPEN_READ		((__force blk_mode_t)(1 << 0))
-/* open for writing */
-#define BLK_OPEN_WRITE		((__force blk_mode_t)(1 << 1))
-/* open exclusively (vs other exclusive openers */
-#define BLK_OPEN_EXCL		((__force blk_mode_t)(1 << 2))
-/* opened with O_NDELAY */
-#define BLK_OPEN_NDELAY		((__force blk_mode_t)(1 << 3))
-/* open for "writes" only for ioctls (specialy hack for floppy.c) */
-#define BLK_OPEN_WRITE_IOCTL	((__force blk_mode_t)(1 << 4))
-/* open is exclusive wrt all other BLK_OPEN_WRITE opens to the device */
-#define BLK_OPEN_RESTRICT_WRITES	((__force blk_mode_t)(1 << 5))
-
-struct gendisk {
-	/*
-	 * major/first_minor/minors should not be set by any new driver, the
-	 * block core will take care of allocating them automatically.
-	 */
-	int major;
-	int first_minor;
-	int minors;
-
-	char disk_name[DISK_NAME_LEN];	/* name of major driver */
-
-	unsigned short events;		/* supported events */
-	unsigned short event_flags;	/* flags related to event processing */
-
-	struct xarray part_tbl;
-	struct block_device *part0;
-
-	const struct block_device_operations *fops;
-	struct request_queue *queue;
-	void *private_data;
-
-	struct bio_set bio_split;
-
-	int flags;
-	unsigned long state;
-#define GD_NEED_PART_SCAN		0
-#define GD_READ_ONLY			1
-#define GD_DEAD				2
-#define GD_NATIVE_CAPACITY		3
-#define GD_ADDED			4
-#define GD_SUPPRESS_PART_SCAN		5
-#define GD_OWNS_QUEUE			6
-
-	struct mutex open_mutex;	/* open/close mutex */
-	unsigned open_partitions;	/* number of open partitions */
-
-	struct backing_dev_info *bdi;
-	struct kobject queue_kobj;	/* the queue/ directory */
-	struct kobject *slave_dir;
-#ifdef CONFIG_BLOCK_HOLDER_DEPRECATED
-	struct list_head slave_bdevs;
-#endif
-	struct timer_rand_state *random;
-	atomic_t sync_io;		/* RAID */
-	struct disk_events *ev;
-
-#ifdef CONFIG_BLK_DEV_ZONED
-	/*
-	 * Zoned block device information for request dispatch control.
-	 * nr_zones is the total number of zones of the device. This is always
-	 * 0 for regular block devices. conv_zones_bitmap is a bitmap of nr_zones
-	 * bits which indicates if a zone is conventional (bit set) or
-	 * sequential (bit clear). seq_zones_wlock is a bitmap of nr_zones
-	 * bits which indicates if a zone is write locked, that is, if a write
-	 * request targeting the zone was dispatched.
-	 *
-	 * Reads of this information must be protected with blk_queue_enter() /
-	 * blk_queue_exit(). Modifying this information is only allowed while
-	 * no requests are being processed. See also blk_mq_freeze_queue() and
-	 * blk_mq_unfreeze_queue().
-	 */
-	unsigned int		nr_zones;
-	unsigned int		max_open_zones;
-	unsigned int		max_active_zones;
-	unsigned long		*conv_zones_bitmap;
-	unsigned long		*seq_zones_wlock;
-#endif /* CONFIG_BLK_DEV_ZONED */
-
-#if IS_ENABLED(CONFIG_CDROM)
-	struct cdrom_device_info *cdi;
-#endif
-	int node_id;
-	struct badblocks *bb;
-	struct lockdep_map lockdep_map;
-	u64 diskseq;
-	blk_mode_t open_mode;
-
-	/*
-	 * Independent sector access ranges. This is always NULL for
-	 * devices that do not have multiple independent access ranges.
-	 */
-	struct blk_independent_access_ranges *ia_ranges;
-};
-
 static inline bool disk_live(struct gendisk *disk)
 {
 	return !inode_unhashed(disk->part0->bd_inode);
@@ -260,57 +91,6 @@ static inline int blk_validate_block_size(unsigned long bsize)
 	return 0;
 }
 
-/*
- * BLK_BOUNCE_NONE:	never bounce (default)
- * BLK_BOUNCE_HIGH:	bounce all highmem pages
- */
-enum blk_bounce {
-	BLK_BOUNCE_NONE,
-	BLK_BOUNCE_HIGH,
-};
-
-struct queue_limits {
-	enum blk_bounce		bounce;
-	unsigned long		seg_boundary_mask;
-	unsigned long		virt_boundary_mask;
-
-	unsigned int		max_hw_sectors;
-	unsigned int		max_dev_sectors;
-	unsigned int		chunk_sectors;
-	unsigned int		max_sectors;
-	unsigned int		max_user_sectors;
-	unsigned int		max_segment_size;
-	unsigned int		physical_block_size;
-	unsigned int		logical_block_size;
-	unsigned int		alignment_offset;
-	unsigned int		io_min;
-	unsigned int		io_opt;
-	unsigned int		max_discard_sectors;
-	unsigned int		max_hw_discard_sectors;
-	unsigned int		max_secure_erase_sectors;
-	unsigned int		max_write_zeroes_sectors;
-	unsigned int		max_zone_append_sectors;
-	unsigned int		discard_granularity;
-	unsigned int		discard_alignment;
-	unsigned int		zone_write_granularity;
-
-	unsigned short		max_segments;
-	unsigned short		max_integrity_segments;
-	unsigned short		max_discard_segments;
-
-	unsigned char		misaligned;
-	unsigned char		discard_misaligned;
-	unsigned char		raid_partial_stripes_expensive;
-	bool			zoned;
-
-	/*
-	 * Drivers that set dma_alignment to less than 511 must be prepared to
-	 * handle individual bvec's that are not a multiple of a SECTOR_SIZE
-	 * due to possible offsets.
-	 */
-	unsigned int		dma_alignment;
-};
-
 typedef int (*report_zones_cb)(struct blk_zone *zone, unsigned int idx,
 			       void *data);
 
@@ -351,194 +131,6 @@ struct blk_independent_access_ranges {
 	struct blk_independent_access_range	ia_range[];
 };
 
-struct request_queue {
-	/*
-	 * The queue owner gets to use this for whatever they like.
-	 * ll_rw_blk doesn't touch it.
-	 */
-	void			*queuedata;
-
-	struct elevator_queue	*elevator;
-
-	const struct blk_mq_ops	*mq_ops;
-
-	/* sw queues */
-	struct blk_mq_ctx __percpu	*queue_ctx;
-
-	/*
-	 * various queue flags, see QUEUE_* below
-	 */
-	unsigned long		queue_flags;
-
-	unsigned int		rq_timeout;
-
-	unsigned int		queue_depth;
-
-	refcount_t		refs;
-
-	/* hw dispatch queues */
-	unsigned int		nr_hw_queues;
-	struct xarray		hctx_table;
-
-	struct percpu_ref	q_usage_counter;
-
-	struct request		*last_merge;
-
-	spinlock_t		queue_lock;
-
-	int			quiesce_depth;
-
-	struct gendisk		*disk;
-
-	/*
-	 * mq queue kobject
-	 */
-	struct kobject		*mq_kobj;
-
-	struct queue_limits	limits;
-
-#ifdef CONFIG_BLK_DEV_INTEGRITY
-	struct blk_integrity	integrity;
-#endif	/* CONFIG_BLK_DEV_INTEGRITY */
-
-#ifdef CONFIG_PM
-	struct device		*dev;
-	enum rpm_status		rpm_status;
-#endif
-
-	/*
-	 * Number of contexts that have called blk_set_pm_only(). If this
-	 * counter is above zero then only RQF_PM requests are processed.
-	 */
-	atomic_t		pm_only;
-
-	struct blk_queue_stats	*stats;
-	struct rq_qos		*rq_qos;
-	struct mutex		rq_qos_mutex;
-
-	/*
-	 * ida allocated id for this queue. Used to index queues from
-	 * ioctx.
-	 */
-	int			id;
-
-	unsigned int		dma_pad_mask;
-
-	/*
-	 * queue settings
-	 */
-	unsigned long		nr_requests;	/* Max # of requests */
-
-#ifdef CONFIG_BLK_INLINE_ENCRYPTION
-	struct blk_crypto_profile *crypto_profile;
-	struct kobject *crypto_kobject;
-#endif
-
-	struct timer_list	timeout;
-	struct work_struct	timeout_work;
-
-	atomic_t		nr_active_requests_shared_tags;
-
-	unsigned int		required_elevator_features;
-
-	struct blk_mq_tags	*sched_shared_tags;
-
-	struct list_head	icq_list;
-#ifdef CONFIG_BLK_CGROUP
-	DECLARE_BITMAP(blkcg_pols, BLKCG_MAX_POLS);
-	struct blkcg_gq		*root_blkg;
-	struct list_head	blkg_list;
-	struct mutex		blkcg_mutex;
-#endif
-
-	int			node;
-
-	spinlock_t		requeue_lock;
-	struct list_head	requeue_list;
-	struct delayed_work	requeue_work;
-
-#ifdef CONFIG_BLK_DEV_IO_TRACE
-	struct blk_trace __rcu	*blk_trace;
-#endif
-	/*
-	 * for flush operations
-	 */
-	struct blk_flush_queue	*fq;
-	struct list_head	flush_list;
-
-	struct mutex		sysfs_lock;
-	struct mutex		sysfs_dir_lock;
-
-	/*
-	 * for reusing dead hctx instance in case of updating
-	 * nr_hw_queues
-	 */
-	struct list_head	unused_hctx_list;
-	spinlock_t		unused_hctx_lock;
-
-	int			mq_freeze_depth;
-
-#ifdef CONFIG_BLK_DEV_THROTTLING
-	/* Throttle data */
-	struct throtl_data	*td;
-#endif
-	struct rcu_head		rcu_head;
-	wait_queue_head_t	mq_freeze_wq;
-	/*
-	 * Protect concurrent access to q_usage_counter by
-	 * percpu_ref_kill() and percpu_ref_reinit().
-	 */
-	struct mutex		mq_freeze_lock;
-
-	struct blk_mq_tag_set	*tag_set;
-	struct list_head	tag_set_list;
-
-	struct dentry		*debugfs_dir;
-	struct dentry		*sched_debugfs_dir;
-	struct dentry		*rqos_debugfs_dir;
-	/*
-	 * Serializes all debugfs metadata operations using the above dentries.
-	 */
-	struct mutex		debugfs_mutex;
-
-	bool			mq_sysfs_init_done;
-};
-
-/* Keep blk_queue_flag_name[] in sync with the definitions below */
-#define QUEUE_FLAG_STOPPED	0	/* queue is stopped */
-#define QUEUE_FLAG_DYING	1	/* queue being torn down */
-#define QUEUE_FLAG_NOMERGES	3	/* disable merge attempts */
-#define QUEUE_FLAG_SAME_COMP	4	/* complete on same CPU-group */
-#define QUEUE_FLAG_FAIL_IO	5	/* fake timeout */
-#define QUEUE_FLAG_NONROT	6	/* non-rotational device (SSD) */
-#define QUEUE_FLAG_VIRT		QUEUE_FLAG_NONROT /* paravirt device */
-#define QUEUE_FLAG_IO_STAT	7	/* do disk/partitions IO accounting */
-#define QUEUE_FLAG_NOXMERGES	9	/* No extended merges */
-#define QUEUE_FLAG_ADD_RANDOM	10	/* Contributes to random pool */
-#define QUEUE_FLAG_SYNCHRONOUS	11	/* always completes in submit context */
-#define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
-#define QUEUE_FLAG_HW_WC	13	/* Write back caching supported */
-#define QUEUE_FLAG_INIT_DONE	14	/* queue is initialized */
-#define QUEUE_FLAG_STABLE_WRITES 15	/* don't modify blks until WB is done */
-#define QUEUE_FLAG_POLL		16	/* IO polling enabled if set */
-#define QUEUE_FLAG_WC		17	/* Write back caching */
-#define QUEUE_FLAG_FUA		18	/* device supports FUA writes */
-#define QUEUE_FLAG_DAX		19	/* device supports DAX */
-#define QUEUE_FLAG_STATS	20	/* track IO start and completion times */
-#define QUEUE_FLAG_REGISTERED	22	/* queue has been registered to a disk */
-#define QUEUE_FLAG_QUIESCED	24	/* queue has been quiesced */
-#define QUEUE_FLAG_PCI_P2PDMA	25	/* device supports PCI p2p requests */
-#define QUEUE_FLAG_ZONE_RESETALL 26	/* supports Zone Reset All */
-#define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
-#define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
-#define QUEUE_FLAG_NOWAIT	29	/* device supports NOWAIT */
-#define QUEUE_FLAG_SQ_SCHED	30	/* single queue style io dispatch */
-#define QUEUE_FLAG_SKIP_TAGSET_QUIESCE	31 /* quiesce_tagset skip the queue*/
-
-#define QUEUE_FLAG_MQ_DEFAULT	((1UL << QUEUE_FLAG_IO_STAT) |		\
-				 (1UL << QUEUE_FLAG_SAME_COMP) |	\
-				 (1UL << QUEUE_FLAG_NOWAIT))
-
 void blk_queue_flag_set(unsigned int flag, struct request_queue *q);
 void blk_queue_flag_clear(unsigned int flag, struct request_queue *q);
 bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
@@ -921,32 +513,6 @@ extern void blk_put_queue(struct request_queue *);
 void blk_mark_disk_dead(struct gendisk *disk);
 
 #ifdef CONFIG_BLOCK
-/*
- * blk_plug permits building a queue of related requests by holding the I/O
- * fragments for a short period. This allows merging of sequential requests
- * into single larger request. As the requests are moved from a per-task list to
- * the device's request_queue in a batch, this results in improved scalability
- * as the lock contention for request_queue lock is reduced.
- *
- * It is ok not to disable preemption when adding the request to the plug list
- * or when attempting a merge. For details, please see schedule() where
- * blk_flush_plug() is called.
- */
-struct blk_plug {
-	struct request *mq_list; /* blk-mq requests */
-
-	/* if ios_left is > 1, we can batch tag/rq allocations */
-	struct request *cached_rq;
-	u64 cur_ktime;
-	unsigned short nr_ios;
-
-	unsigned short rq_count;
-
-	bool multiple_queues;
-	bool has_elevator;
-
-	struct list_head cb_list; /* md requires an unplug callback */
-};
 
 struct blk_plug_cb;
 typedef void (*blk_plug_cb_fn)(struct blk_plug_cb *, bool);
@@ -983,8 +549,6 @@ static inline void blk_plug_invalidate_ts(struct task_struct *tsk)
 int blkdev_issue_flush(struct block_device *bdev);
 long nr_blockdev_pages(void);
 #else /* CONFIG_BLOCK */
-struct blk_plug {
-};
 
 static inline void blk_start_plug_nr_ios(struct blk_plug *plug,
 					 unsigned short nr_ios)
@@ -1540,12 +1104,6 @@ static inline int early_lookup_bdev(const char *pathname, dev_t *dev)
 int bdev_freeze(struct block_device *bdev);
 int bdev_thaw(struct block_device *bdev);
 
-struct io_comp_batch {
-	struct request *req_list;
-	bool need_ts;
-	void (*complete)(struct io_comp_batch *);
-};
-
 #define DEFINE_IO_COMP_BATCH(name)	struct io_comp_batch name = { }
 
 #endif /* _LINUX_BLKDEV_H */
diff --git a/include/linux/blkdev_types.h b/include/linux/blkdev_types.h
new file mode 100644
index 000000000000..e53ff5d57e29
--- /dev/null
+++ b/include/linux/blkdev_types.h
@@ -0,0 +1,463 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_BLKDEV_TYPES_H
+#define _LINUX_BLKDEV_TYPES_H
+
+#include
+#include	// for BITS_TO_LONGS(), needed by DECLARE_BITMAP()
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/*
+ * Maximum number of blkcg policies allowed to be registered concurrently.
+ * Defined here to simplify include dependency.
+ */
+#define BLKCG_MAX_POLS		6
+
+#define DISK_MAX_PARTS			256
+#define DISK_NAME_LEN			32
+
+#define PARTITION_META_INFO_VOLNAMELTH	64
+/*
+ * Enough for the string representation of any kind of UUID plus NULL.
+ * EFI UUID is 36 characters. MSDOS UUID is 11 characters.
+ */
+#define PARTITION_META_INFO_UUIDLTH	(UUID_STRING_LEN + 1)
+
+struct partition_meta_info {
+	char uuid[PARTITION_META_INFO_UUIDLTH];
+	u8 volname[PARTITION_META_INFO_VOLNAMELTH];
+};
+
+/**
+ * DOC: genhd capability flags
+ *
+ * ``GENHD_FL_REMOVABLE``: indicates that the block device gives access to
+ * removable media. When set, the device remains present even when media is not
+ * inserted. Shall not be set for devices which are removed entirely when the
+ * media is removed.
+ *
+ * ``GENHD_FL_HIDDEN``: the block device is hidden; it doesn't produce events,
+ * doesn't appear in sysfs, and can't be opened from userspace or using
+ * blkdev_get*. Used for the underlying components of multipath devices.
+ *
+ * ``GENHD_FL_NO_PART``: partition support is disabled. The kernel will not
+ * scan for partitions from add_disk, and users can't add partitions manually.
+ *
+ */
+enum {
+	GENHD_FL_REMOVABLE	= 1 << 0,
+	GENHD_FL_HIDDEN		= 1 << 1,
+	GENHD_FL_NO_PART	= 1 << 2,
+};
+
+enum {
+	DISK_EVENT_MEDIA_CHANGE		= 1 << 0, /* media changed */
+	DISK_EVENT_EJECT_REQUEST	= 1 << 1, /* eject requested */
+};
+
+enum {
+	/* Poll even if events_poll_msecs is unset */
+	DISK_EVENT_FLAG_POLL			= 1 << 0,
+	/* Forward events to udev */
+	DISK_EVENT_FLAG_UEVENT			= 1 << 1,
+	/* Block event polling when open for exclusive write */
+	DISK_EVENT_FLAG_BLOCK_ON_EXCL_WRITE	= 1 << 2,
+};
+
+struct disk_events;
+struct badblocks;
+
+struct blk_integrity {
+	const struct blk_integrity_profile	*profile;
+	unsigned char				flags;
+	unsigned char				tuple_size;
+	unsigned char				interval_exp;
+	unsigned char				tag_size;
+};
+
+typedef unsigned int __bitwise blk_mode_t;
+
+/* open for reading */
+#define BLK_OPEN_READ		((__force blk_mode_t)(1 << 0))
+/* open for writing */
+#define BLK_OPEN_WRITE		((__force blk_mode_t)(1 << 1))
+/* open exclusively (vs other exclusive openers */
+#define BLK_OPEN_EXCL		((__force blk_mode_t)(1 << 2))
+/* opened with O_NDELAY */
+#define BLK_OPEN_NDELAY		((__force blk_mode_t)(1 << 3))
+/* open for "writes" only for ioctls (specialy hack for floppy.c) */
+#define BLK_OPEN_WRITE_IOCTL	((__force blk_mode_t)(1 << 4))
+/* open is exclusive wrt all other BLK_OPEN_WRITE opens to the device */
+#define BLK_OPEN_RESTRICT_WRITES	((__force blk_mode_t)(1 << 5))
+
+struct gendisk {
+	/*
+	 * major/first_minor/minors should not be set by any new driver, the
+	 * block core will take care of allocating them automatically.
+	 */
+	int major;
+	int first_minor;
+	int minors;
+
+	char disk_name[DISK_NAME_LEN];	/* name of major driver */
+
+	unsigned short events;		/* supported events */
+	unsigned short event_flags;	/* flags related to event processing */
+
+	struct xarray part_tbl;
+	struct block_device *part0;
+
+	const struct block_device_operations *fops;
+	struct request_queue *queue;
+	void *private_data;
+
+	struct bio_set bio_split;
+
+	int flags;
+	unsigned long state;
+#define GD_NEED_PART_SCAN		0
+#define GD_READ_ONLY			1
+#define GD_DEAD				2
+#define GD_NATIVE_CAPACITY		3
+#define GD_ADDED			4
+#define GD_SUPPRESS_PART_SCAN		5
+#define GD_OWNS_QUEUE			6
+
+	struct mutex open_mutex;	/* open/close mutex */
+	unsigned open_partitions;	/* number of open partitions */
+
+	struct backing_dev_info *bdi;
+	struct kobject queue_kobj;	/* the queue/ directory */
+	struct kobject *slave_dir;
+#ifdef CONFIG_BLOCK_HOLDER_DEPRECATED
+	struct list_head slave_bdevs;
+#endif
+	struct timer_rand_state *random;
+	atomic_t sync_io;		/* RAID */
+	struct disk_events *ev;
+
+#ifdef CONFIG_BLK_DEV_ZONED
+	/*
+	 * Zoned block device information for request dispatch control.
+	 * nr_zones is the total number of zones of the device. This is always
+	 * 0 for regular block devices. conv_zones_bitmap is a bitmap of nr_zones
+	 * bits which indicates if a zone is conventional (bit set) or
+	 * sequential (bit clear). seq_zones_wlock is a bitmap of nr_zones
+	 * bits which indicates if a zone is write locked, that is, if a write
+	 * request targeting the zone was dispatched.
+	 *
+	 * Reads of this information must be protected with blk_queue_enter() /
+	 * blk_queue_exit(). Modifying this information is only allowed while
+	 * no requests are being processed. See also blk_mq_freeze_queue() and
+	 * blk_mq_unfreeze_queue().
+	 */
+	unsigned int		nr_zones;
+	unsigned int		max_open_zones;
+	unsigned int		max_active_zones;
+	unsigned long		*conv_zones_bitmap;
+	unsigned long		*seq_zones_wlock;
+#endif /* CONFIG_BLK_DEV_ZONED */
+
+#if IS_ENABLED(CONFIG_CDROM)
+	struct cdrom_device_info *cdi;
+#endif
+	int node_id;
+	struct badblocks *bb;
+	struct lockdep_map lockdep_map;
+	u64 diskseq;
+	blk_mode_t open_mode;
+
+	/*
+	 * Independent sector access ranges. This is always NULL for
+	 * devices that do not have multiple independent access ranges.
+	 */
+	struct blk_independent_access_ranges *ia_ranges;
+};
+
+/*
+ * BLK_BOUNCE_NONE:	never bounce (default)
+ * BLK_BOUNCE_HIGH:	bounce all highmem pages
+ */
+enum blk_bounce {
+	BLK_BOUNCE_NONE,
+	BLK_BOUNCE_HIGH,
+};
+
+struct queue_limits {
+	enum blk_bounce		bounce;
+	unsigned long		seg_boundary_mask;
+	unsigned long		virt_boundary_mask;
+
+	unsigned int		max_hw_sectors;
+	unsigned int		max_dev_sectors;
+	unsigned int		chunk_sectors;
+	unsigned int		max_sectors;
+	unsigned int		max_user_sectors;
+	unsigned int		max_segment_size;
+	unsigned int		physical_block_size;
+	unsigned int		logical_block_size;
+	unsigned int		alignment_offset;
+	unsigned int		io_min;
+	unsigned int		io_opt;
+	unsigned int		max_discard_sectors;
+	unsigned int		max_hw_discard_sectors;
+	unsigned int		max_secure_erase_sectors;
+	unsigned int		max_write_zeroes_sectors;
+	unsigned int		max_zone_append_sectors;
+	unsigned int		discard_granularity;
+	unsigned int		discard_alignment;
+	unsigned int		zone_write_granularity;
+
+	unsigned short		max_segments;
+	unsigned short		max_integrity_segments;
+	unsigned short		max_discard_segments;
+
+	unsigned char		misaligned;
+	unsigned char		discard_misaligned;
+	unsigned char		raid_partial_stripes_expensive;
+	bool			zoned;
+
+	/*
+	 * Drivers that set dma_alignment to less than 511 must be prepared to
+	 * handle individual bvec's that are not a multiple of a SECTOR_SIZE
+	 * due to possible offsets.
+	 */
+	unsigned int		dma_alignment;
+};
+
+struct request_queue {
+	/*
+	 * The queue owner gets to use this for whatever they like.
+	 * ll_rw_blk doesn't touch it.
+	 */
+	void			*queuedata;
+
+	struct elevator_queue	*elevator;
+
+	const struct blk_mq_ops	*mq_ops;
+
+	/* sw queues */
+	struct blk_mq_ctx __percpu	*queue_ctx;
+
+	/*
+	 * various queue flags, see QUEUE_* below
+	 */
+	unsigned long		queue_flags;
+
+	unsigned int		rq_timeout;
+
+	unsigned int		queue_depth;
+
+	refcount_t		refs;
+
+	/* hw dispatch queues */
+	unsigned int		nr_hw_queues;
+	struct xarray		hctx_table;
+
+	struct percpu_ref	q_usage_counter;
+
+	struct request		*last_merge;
+
+	spinlock_t		queue_lock;
+
+	int			quiesce_depth;
+
+	struct gendisk		*disk;
+
+	/*
+	 * mq queue kobject
+	 */
+	struct kobject		*mq_kobj;
+
+	struct queue_limits	limits;
+
+#ifdef CONFIG_BLK_DEV_INTEGRITY
+	struct blk_integrity	integrity;
+#endif	/* CONFIG_BLK_DEV_INTEGRITY */
+
+#ifdef CONFIG_PM
+	struct device		*dev;
+	enum rpm_status		rpm_status;
+#endif
+
+	/*
+	 * Number of contexts that have called blk_set_pm_only(). If this
+	 * counter is above zero then only RQF_PM requests are processed.
+	 */
+	atomic_t		pm_only;
+
+	struct blk_queue_stats	*stats;
+	struct rq_qos		*rq_qos;
+	struct mutex		rq_qos_mutex;
+
+	/*
+	 * ida allocated id for this queue. Used to index queues from
+	 * ioctx.
+	 */
+	int			id;
+
+	unsigned int		dma_pad_mask;
+
+	/*
+	 * queue settings
+	 */
+	unsigned long		nr_requests;	/* Max # of requests */
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	struct blk_crypto_profile *crypto_profile;
+	struct kobject *crypto_kobject;
+#endif
+
+	struct timer_list	timeout;
+	struct work_struct	timeout_work;
+
+	atomic_t		nr_active_requests_shared_tags;
+
+	unsigned int		required_elevator_features;
+
+	struct blk_mq_tags	*sched_shared_tags;
+
+	struct list_head	icq_list;
+#ifdef CONFIG_BLK_CGROUP
+	DECLARE_BITMAP(blkcg_pols, BLKCG_MAX_POLS);
+	struct blkcg_gq		*root_blkg;
+	struct list_head	blkg_list;
+	struct mutex		blkcg_mutex;
+#endif
+
+	int			node;
+
+	spinlock_t		requeue_lock;
+	struct list_head	requeue_list;
+	struct delayed_work	requeue_work;
+
+#ifdef CONFIG_BLK_DEV_IO_TRACE
+	struct blk_trace __rcu	*blk_trace;
+#endif
+	/*
+	 * for flush operations
+	 */
+	struct blk_flush_queue	*fq;
+	struct list_head	flush_list;
+
+	struct mutex		sysfs_lock;
+	struct mutex		sysfs_dir_lock;
+
+	/*
+	 * for reusing dead hctx instance in case of updating
+	 * nr_hw_queues
+	 */
+	struct list_head	unused_hctx_list;
+	spinlock_t		unused_hctx_lock;
+
+	int			mq_freeze_depth;
+
+#ifdef CONFIG_BLK_DEV_THROTTLING
+	/* Throttle data */
+	struct throtl_data	*td;
+#endif
+	struct rcu_head		rcu_head;
+	wait_queue_head_t	mq_freeze_wq;
+	/*
+	 * Protect concurrent access to q_usage_counter by
+	 * percpu_ref_kill() and percpu_ref_reinit().
+	 */
+	struct mutex		mq_freeze_lock;
+
+	struct blk_mq_tag_set	*tag_set;
+	struct list_head	tag_set_list;
+
+	struct dentry		*debugfs_dir;
+	struct dentry		*sched_debugfs_dir;
+	struct dentry		*rqos_debugfs_dir;
+	/*
+	 * Serializes all debugfs metadata operations using the above dentries.
+	 */
+	struct mutex		debugfs_mutex;
+
+	bool			mq_sysfs_init_done;
+};
+
+/* Keep blk_queue_flag_name[] in sync with the definitions below */
+#define QUEUE_FLAG_STOPPED	0	/* queue is stopped */
+#define QUEUE_FLAG_DYING	1	/* queue being torn down */
+#define QUEUE_FLAG_NOMERGES	3	/* disable merge attempts */
+#define QUEUE_FLAG_SAME_COMP	4	/* complete on same CPU-group */
+#define QUEUE_FLAG_FAIL_IO	5	/* fake timeout */
+#define QUEUE_FLAG_NONROT	6	/* non-rotational device (SSD) */
+#define QUEUE_FLAG_VIRT		QUEUE_FLAG_NONROT /* paravirt device */
+#define QUEUE_FLAG_IO_STAT	7	/* do disk/partitions IO accounting */
+#define QUEUE_FLAG_NOXMERGES	9	/* No extended merges */
+#define QUEUE_FLAG_ADD_RANDOM	10	/* Contributes to random pool */
+#define QUEUE_FLAG_SYNCHRONOUS	11	/* always completes in submit context */
+#define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
+#define QUEUE_FLAG_HW_WC	13	/* Write back caching supported */
+#define QUEUE_FLAG_INIT_DONE	14	/* queue is initialized */
+#define QUEUE_FLAG_STABLE_WRITES 15	/* don't modify blks until WB is done */
+#define QUEUE_FLAG_POLL		16	/* IO polling enabled if set */
+#define QUEUE_FLAG_WC		17	/* Write back caching */
+#define QUEUE_FLAG_FUA		18	/* device supports FUA writes */
+#define QUEUE_FLAG_DAX		19	/* device supports DAX */
+#define QUEUE_FLAG_STATS	20	/* track IO start and completion times */
+#define QUEUE_FLAG_REGISTERED	22	/* queue has been registered to a disk */
+#define QUEUE_FLAG_QUIESCED	24	/* queue has been quiesced */
+#define QUEUE_FLAG_PCI_P2PDMA	25	/* device supports PCI p2p requests */
+#define QUEUE_FLAG_ZONE_RESETALL 26	/* supports Zone Reset All */
+#define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
+#define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
+#define QUEUE_FLAG_NOWAIT	29	/* device supports NOWAIT */
+#define QUEUE_FLAG_SQ_SCHED	30	/* single queue style io dispatch */
+#define QUEUE_FLAG_SKIP_TAGSET_QUIESCE	31 /* quiesce_tagset skip the queue*/
+
+#define QUEUE_FLAG_MQ_DEFAULT	((1UL << QUEUE_FLAG_IO_STAT) |		\
+				 (1UL << QUEUE_FLAG_SAME_COMP) |	\
+				 (1UL << QUEUE_FLAG_NOWAIT))
+
+#ifdef CONFIG_BLOCK
+
+/*
+ * blk_plug permits building a queue of related requests by holding the I/O
+ * fragments for a short period. This allows merging of sequential requests
+ * into single larger request. As the requests are moved from a per-task list to
+ * the device's request_queue in a batch, this results in improved scalability
+ * as the lock contention for request_queue lock is reduced.
+ *
+ * It is ok not to disable preemption when adding the request to the plug list
+ * or when attempting a merge. For details, please see schedule() where
+ * blk_flush_plug() is called.
+ */
+struct blk_plug {
+	struct request *mq_list; /* blk-mq requests */
+
+	/* if ios_left is > 1, we can batch tag/rq allocations */
+	struct request *cached_rq;
+	u64 cur_ktime;
+	unsigned short nr_ios;
+
+	unsigned short rq_count;
+
+	bool multiple_queues;
+	bool has_elevator;
+
+	struct list_head cb_list; /* md requires an unplug callback */
+};
+
+#else /* CONFIG_BLOCK */
+
+struct blk_plug {
+};
+
+#endif /* CONFIG_BLOCK */
+
+struct io_comp_batch {
+	struct request *req_list;
+	bool need_ts;
+	void (*complete)(struct io_comp_batch *);
+};
+
+#endif /* _LINUX_BLKDEV_TYPES_H */
diff --git a/include/linux/cdrom.h b/include/linux/cdrom.h
index 98c6fd0b39b6..d99709e14ecb 100644
--- a/include/linux/cdrom.h
+++ b/include/linux/cdrom.h
@@ -13,7 +13,7 @@
 #include	/* not really needed, later.. */
 #include
-#include
+#include
 #include
 #include
 
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 4fe7af8a4907..8dcf55fe0e09 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -1,7 +1,7 @@
 #ifndef IO_URING_TYPES_H
 #define IO_URING_TYPES_H
 
-#include
+#include
 #include
 #include
 #include
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 4cc333203665..d111784bb406 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -47,6 +47,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
-- 
2.39.2
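
The rationale the commit message states in one line ("a lean header reduces
header dependencies") is the standard foo.h / foo_types.h split: the _types.h
header carries only type definitions and constants, so a consumer that merely
embeds or points at the types no longer pulls in the full dependency tree
behind the API header, which shortens incremental rebuilds. A minimal sketch
of the pattern, using hypothetical widget.h / widget_types.h file names that
are not taken from this patch:

    /* widget_types.h - lean header: type definitions and constants only,
     * so it needs few or no other headers and is cheap to include. */
    #ifndef _WIDGET_TYPES_H
    #define _WIDGET_TYPES_H

    struct widget {
            int id;
            unsigned long flags;
    };

    #endif /* _WIDGET_TYPES_H */

    /* widget.h - full header: the API on top of the types. Everything the
     * function prototypes need is included here, not in widget_types.h. */
    #ifndef _WIDGET_H
    #define _WIDGET_H

    #include "widget_types.h"

    void widget_init(struct widget *w);

    #endif /* _WIDGET_H */

A file that only stores a struct widget by value includes widget_types.h;
only callers of widget_init() pay for widget.h. That is why the hunks above
can switch io_uring_types.h from the full block-device header to
blkdev_types.h, while io_uring.c, which actually calls into the block layer,
gains an explicit include of the header it needs.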