2022-06-14 07:21:38

by Imran Khan

Subject: [PATCH v6 0/4] kernfs: make ->attr.open RCU protected.

The patches in this version of the patch set are as follows:

PATCH-1: Make kernfs_open_node->attr.open RCU protected.

PATCH-2: Change kernfs_notify_list to llist.

PATCH-3: Introduce interface to access kernfs_open_file_mutex.

PATCH-4: Replace global kernfs_open_file_mutex with hashed mutexes.

Changes since v5:
- Use 2 helpers for ->attr.open dereferencing

Changes since v4:
- Rebase on tag next-20220610
- Use one helper for all ->attr.open dereferencing.

Changes since v3:
- Rebase on tag next-20220601
- Rename RCU access related helpers and update their description
- Address errors reported by kernel test robot
- Include Acked-by tags from Tejun for the acked patches (PATCH-2,3,4)

Changes since v2:
- Rebase on tag next-20220510
- Remove PATCH-1 of v2 because it is present in tag next-20220510
- Include Acked-by tags from Tejun for the acked patches (PATCH-2 and PATCH-3)


Cover letter for v2:
--------------------------------------------------------------------------

I have not yet received any feedback on v1 of this patchset [2], but in
the meantime an old version of the first patch from [3] has been
integrated into linux-next. Functionally, the first patch in [2] and [3]
is identical; [2] merely renames one of the functions to better reflect
the fact that we are no longer using reference counting for
kernfs_open_node.

In this version, I have only modified the first patch of v1 so that it
uses the renamed function as done in [2] and drops the parts that are
already present in linux-next. The remaining 4 patches (PATCH-2 to
PATCH-5) are identical in v1 and v2, although v2 has been rebased on tag
next-20220503.

Changes since v1:
- Rebase on tag next-20220503

[2]: https://lore.kernel.org/lkml/[email protected]/
[3]: https://lore.kernel.org/lkml/[email protected]/

Original cover letter
-------------------------------------------------------

This patchset contains a subset of the patches (after addressing review
comments) discussed at [1]. Since [1] replaces multiple global locks, and
since each of these locks can be removed independently, it was decided to
make these changes in parts, i.e. first get one set of optimizations
integrated and then build further work on top of it.

The patches in this change set introduce the following changes:

PATCH-1: Remove reference counting for kernfs_open_node.

PATCH-2: Make kernfs_open_node->attr.open RCU protected.

PATCH-3: Change kernfs_notify_list to llist.

PATCH-4: Introduce interface to access kernfs_open_file_mutex.

PATCH-5: Replace global kernfs_open_file_mutex with hashed mutexes.

[1] https://lore.kernel.org/lkml/[email protected]/

----------------------------------------------------------------

Imran Khan (4):
kernfs: make ->attr.open RCU protected.
kernfs: Change kernfs_notify_list to llist.
kernfs: Introduce interface to access global kernfs_open_file_mutex.
kernfs: Replace global kernfs_open_file_mutex with hashed mutexes.

fs/kernfs/file.c | 249 ++++++++++++++++++++++--------------
fs/kernfs/kernfs-internal.h | 4 +
fs/kernfs/mount.c | 19 +++
include/linux/kernfs.h | 61 ++++++++-
4 files changed, 235 insertions(+), 98 deletions(-)


base-commit: 6d0c806803170f120f8cb97b321de7bd89d3a791
--
2.30.2


2022-06-14 07:48:33

by Imran Khan

Subject: [PATCH v6 3/4] kernfs: Introduce interface to access global kernfs_open_file_mutex.

This allows the underlying mutex locking to be changed without needing to
change the users of the lock. For example, the next patch modifies this
interface to use hashed mutexes in place of the single global
kernfs_open_file_mutex.

Signed-off-by: Imran Khan <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
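Note (illustration only, not part of this patch): once every caller goes
through kernfs_open_file_mutex_ptr()/kernfs_open_file_mutex_lock(),
changing the locking scheme only means changing what the pointer helper
returns. A rough sketch of a hashed variant, using a hypothetical table
name and size (the actual conversion is done in the next patch), could
look like:

	#include <linux/hash.h>

	#define KERNFS_MUTEX_BITS	6	/* hypothetical: 64 hashed locks */

	/* each entry needs mutex_init() during kernfs initialization */
	static struct mutex kernfs_open_file_mutexes[1 << KERNFS_MUTEX_BITS];

	static inline struct mutex *
	kernfs_open_file_mutex_ptr(struct kernfs_node *kn)
	{
		/* hash the kernfs_node pointer to pick one of the mutexes */
		return &kernfs_open_file_mutexes[hash_ptr(kn, KERNFS_MUTEX_BITS)];
	}

kernfs_open_file_mutex_lock() and all of its callers remain unchanged.
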
fs/kernfs/file.c | 56 ++++++++++++++++++++++++++++++++----------------
1 file changed, 38 insertions(+), 18 deletions(-)

diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
index 77aeb0b6f992b..38fb71b2c671e 100644
--- a/fs/kernfs/file.c
+++ b/fs/kernfs/file.c
@@ -49,6 +49,22 @@ struct kernfs_open_node {

static LLIST_HEAD(kernfs_notify_list);

+static inline struct mutex *kernfs_open_file_mutex_ptr(struct kernfs_node *kn)
+{
+ return &kernfs_open_file_mutex;
+}
+
+static inline struct mutex *kernfs_open_file_mutex_lock(struct kernfs_node *kn)
+{
+ struct mutex *lock;
+
+ lock = kernfs_open_file_mutex_ptr(kn);
+
+ mutex_lock(lock);
+
+ return lock;
+}
+
/**
* kernfs_deref_open_node - Get kernfs_open_node corresponding to @kn.
*
@@ -79,9 +95,9 @@ kernfs_deref_open_node(struct kernfs_open_file *of, struct kernfs_node *kn)
* @kn: target kernfs_node.
*
* Fetch and return ->attr.open of @kn when caller holds the
- * kernfs_open_file_mutex.
+ * kernfs_open_file_mutex_ptr(kn).
*
- * Update of ->attr.open happens under kernfs_open_file_mutex. So when
+ * Update of ->attr.open happens under kernfs_open_file_mutex_ptr(kn). So when
* the caller guarantees that this mutex is being held, other updaters can't
* change ->attr.open and this means that we can safely deref ->attr.open
* outside RCU read-side critical section.
@@ -92,7 +108,7 @@ static struct kernfs_open_node *
kernfs_deref_open_node_protected(struct kernfs_node *kn)
{
return rcu_dereference_check(kn->attr.open,
- lockdep_is_held(&kernfs_open_file_mutex));
+ lockdep_is_held(kernfs_open_file_mutex_ptr(kn)));
}

static struct kernfs_open_file *kernfs_of(struct file *file)
@@ -575,19 +591,20 @@ static int kernfs_get_open_node(struct kernfs_node *kn,
struct kernfs_open_file *of)
{
struct kernfs_open_node *on, *new_on = NULL;
+ struct mutex *mutex = NULL;

- mutex_lock(&kernfs_open_file_mutex);
+ mutex = kernfs_open_file_mutex_lock(kn);
on = kernfs_deref_open_node_protected(kn);

if (on) {
list_add_tail(&of->list, &on->files);
- mutex_unlock(&kernfs_open_file_mutex);
+ mutex_unlock(mutex);
return 0;
} else {
/* not there, initialize a new one */
new_on = kmalloc(sizeof(*new_on), GFP_KERNEL);
if (!new_on) {
- mutex_unlock(&kernfs_open_file_mutex);
+ mutex_unlock(mutex);
return -ENOMEM;
}
atomic_set(&new_on->event, 1);
@@ -596,7 +613,7 @@ static int kernfs_get_open_node(struct kernfs_node *kn,
list_add_tail(&of->list, &new_on->files);
rcu_assign_pointer(kn->attr.open, new_on);
}
- mutex_unlock(&kernfs_open_file_mutex);
+ mutex_unlock(mutex);

return 0;
}
@@ -618,12 +635,13 @@ static void kernfs_unlink_open_file(struct kernfs_node *kn,
struct kernfs_open_file *of)
{
struct kernfs_open_node *on;
+ struct mutex *mutex = NULL;

- mutex_lock(&kernfs_open_file_mutex);
+ mutex = kernfs_open_file_mutex_lock(kn);

on = kernfs_deref_open_node_protected(kn);
if (!on) {
- mutex_unlock(&kernfs_open_file_mutex);
+ mutex_unlock(mutex);
return;
}

@@ -635,7 +653,7 @@ static void kernfs_unlink_open_file(struct kernfs_node *kn,
kfree_rcu(on, rcu_head);
}

- mutex_unlock(&kernfs_open_file_mutex);
+ mutex_unlock(mutex);
}

static int kernfs_fop_open(struct inode *inode, struct file *file)
@@ -773,11 +791,11 @@ static void kernfs_release_file(struct kernfs_node *kn,
/*
* @of is guaranteed to have no other file operations in flight and
* we just want to synchronize release and drain paths.
- * @kernfs_open_file_mutex is enough. @of->mutex can't be used
+ * @kernfs_open_file_mutex_ptr(kn) is enough. @of->mutex can't be used
* here because drain path may be called from places which can
* cause circular dependency.
*/
- lockdep_assert_held(&kernfs_open_file_mutex);
+ lockdep_assert_held(kernfs_open_file_mutex_ptr(kn));

if (!of->released) {
/*
@@ -794,11 +812,12 @@ static int kernfs_fop_release(struct inode *inode, struct file *filp)
{
struct kernfs_node *kn = inode->i_private;
struct kernfs_open_file *of = kernfs_of(filp);
+ struct mutex *mutex = NULL;

if (kn->flags & KERNFS_HAS_RELEASE) {
- mutex_lock(&kernfs_open_file_mutex);
+ mutex = kernfs_open_file_mutex_lock(kn);
kernfs_release_file(kn, of);
- mutex_unlock(&kernfs_open_file_mutex);
+ mutex_unlock(mutex);
}

kernfs_unlink_open_file(kn, of);
@@ -813,6 +832,7 @@ void kernfs_drain_open_files(struct kernfs_node *kn)
{
struct kernfs_open_node *on;
struct kernfs_open_file *of;
+ struct mutex *mutex = NULL;

if (!(kn->flags & (KERNFS_HAS_MMAP | KERNFS_HAS_RELEASE)))
return;
@@ -822,16 +842,16 @@ void kernfs_drain_open_files(struct kernfs_node *kn)
* ->attr.open at this point of time. This check allows early bail out
* if ->attr.open is already NULL. kernfs_unlink_open_file makes
* ->attr.open NULL only while holding kernfs_open_file_mutex so below
- * check under kernfs_open_file_mutex will ensure bailing out if
+ * check under kernfs_open_file_mutex_ptr(kn) will ensure bailing out if
* ->attr.open became NULL while waiting for the mutex.
*/
if (!rcu_access_pointer(kn->attr.open))
return;

- mutex_lock(&kernfs_open_file_mutex);
+ mutex = kernfs_open_file_mutex_lock(kn);
on = kernfs_deref_open_node_protected(kn);
if (!on) {
- mutex_unlock(&kernfs_open_file_mutex);
+ mutex_unlock(mutex);
return;
}

@@ -845,7 +865,7 @@ void kernfs_drain_open_files(struct kernfs_node *kn)
kernfs_release_file(kn, of);
}

- mutex_unlock(&kernfs_open_file_mutex);
+ mutex_unlock(mutex);
}

/*
--
2.30.2

2022-06-14 07:50:42

by Imran Khan

Subject: [PATCH v6 1/4] kernfs: make ->attr.open RCU protected.

After the removal of kernfs_open_node->refcnt in the previous patch,
kernfs_open_node_lock can be removed as well by making ->attr.open
RCU protected. kernfs_put_open_node can delegate freeing of ->attr.open
to RCU, and other readers of ->attr.open can access it under
rcu_read_(un)lock.

Suggested-by: Al Viro <[email protected]>
Signed-off-by: Imran Khan <[email protected]>
---
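A note on the resulting lifetime rules (illustration only, condensed from
the diff below): readers access ->attr.open either inside an RCU read-side
critical section or while known to be on the ->files list, and the update
side, serialized by kernfs_open_file_mutex, publishes with
rcu_assign_pointer() and retires with kfree_rcu(). Stripped of the kernfs
details, the pattern is roughly:

	/* reader side, e.g. kernfs_notify() */
	rcu_read_lock();
	on = rcu_dereference(kn->attr.open);
	if (on) {
		atomic_inc(&on->event);
		wake_up_interruptible(&on->poll);
	}
	rcu_read_unlock();

	/* update side, e.g. kernfs_unlink_open_file(), with
	 * kernfs_open_file_mutex held
	 */
	if (list_empty(&on->files)) {
		rcu_assign_pointer(kn->attr.open, NULL);
		kfree_rcu(on, rcu_head);	/* freed after current readers finish */
	}
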
fs/kernfs/file.c | 147 ++++++++++++++++++++++++++++-------------
include/linux/kernfs.h | 2 +-
2 files changed, 102 insertions(+), 47 deletions(-)

diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
index e3abfa843879c..da70f00a59c17 100644
--- a/fs/kernfs/file.c
+++ b/fs/kernfs/file.c
@@ -23,16 +23,16 @@
* for each kernfs_node with one or more open files.
*
* kernfs_node->attr.open points to kernfs_open_node. attr.open is
- * protected by kernfs_open_node_lock.
+ * RCU protected.
*
* filp->private_data points to seq_file whose ->private points to
* kernfs_open_file. kernfs_open_files are chained at
* kernfs_open_node->files, which is protected by kernfs_open_file_mutex.
*/
-static DEFINE_SPINLOCK(kernfs_open_node_lock);
static DEFINE_MUTEX(kernfs_open_file_mutex);

struct kernfs_open_node {
+ struct rcu_head rcu_head;
atomic_t event;
wait_queue_head_t poll;
struct list_head files; /* goes through kernfs_open_file.list */
@@ -51,6 +51,52 @@ struct kernfs_open_node {
static DEFINE_SPINLOCK(kernfs_notify_lock);
static struct kernfs_node *kernfs_notify_list = KERNFS_NOTIFY_EOL;

+/**
+ * kernfs_deref_open_node - Get kernfs_open_node corresponding to @kn.
+ *
+ * @of: associated kernfs_open_file instance.
+ * @kn: target kernfs_node.
+ *
+ * Fetch and return ->attr.open of @kn if @of->list is non empty.
+ * If @of->list is not empty we can safely assume that @of is on
+ * @kn->attr.open->files list and this guarantees that @kn->attr.open
+ * will not vanish i.e. dereferencing outside RCU read-side critical
+ * section is safe here.
+ *
+ * The caller needs to make sure that @of->list is not empty.
+ */
+static struct kernfs_open_node *
+kernfs_deref_open_node(struct kernfs_open_file *of, struct kernfs_node *kn)
+{
+ struct kernfs_open_node *on;
+
+ on = rcu_dereference_check(kn->attr.open, !list_empty(&of->list));
+
+ return on;
+}
+
+/**
+ * kernfs_deref_open_node_protected - Get kernfs_open_node corresponding to @kn
+ *
+ * @kn: target kernfs_node.
+ *
+ * Fetch and return ->attr.open of @kn when caller holds the
+ * kernfs_open_file_mutex.
+ *
+ * Update of ->attr.open happens under kernfs_open_file_mutex. So when
+ * the caller guarantees that this mutex is being held, other updaters can't
+ * change ->attr.open and this means that we can safely deref ->attr.open
+ * outside RCU read-side critical section.
+ *
+ * The caller needs to make sure that kernfs_open_file_mutex is held.
+ */
+static struct kernfs_open_node *
+kernfs_deref_open_node_protected(struct kernfs_node *kn)
+{
+ return rcu_dereference_check(kn->attr.open,
+ lockdep_is_held(&kernfs_open_file_mutex));
+}
+
static struct kernfs_open_file *kernfs_of(struct file *file)
{
return ((struct seq_file *)file->private_data)->private;
@@ -156,8 +202,12 @@ static void kernfs_seq_stop(struct seq_file *sf, void *v)
static int kernfs_seq_show(struct seq_file *sf, void *v)
{
struct kernfs_open_file *of = sf->private;
+ struct kernfs_open_node *on = kernfs_deref_open_node(of, of->kn);

- of->event = atomic_read(&of->kn->attr.open->event);
+ if (!on)
+ return -EINVAL;
+
+ of->event = atomic_read(&on->event);

return of->kn->attr.ops->seq_show(sf, v);
}
@@ -180,6 +230,7 @@ static ssize_t kernfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
struct kernfs_open_file *of = kernfs_of(iocb->ki_filp);
ssize_t len = min_t(size_t, iov_iter_count(iter), PAGE_SIZE);
const struct kernfs_ops *ops;
+ struct kernfs_open_node *on;
char *buf;

buf = of->prealloc_buf;
@@ -201,7 +252,15 @@ static ssize_t kernfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
goto out_free;
}

- of->event = atomic_read(&of->kn->attr.open->event);
+ on = kernfs_deref_open_node(of, of->kn);
+ if (!on) {
+ len = -EINVAL;
+ mutex_unlock(&of->mutex);
+ goto out_free;
+ }
+
+ of->event = atomic_read(&on->event);
+
ops = kernfs_ops(of->kn);
if (ops->read)
len = ops->read(of, buf, len, iocb->ki_pos);
@@ -519,36 +578,29 @@ static int kernfs_get_open_node(struct kernfs_node *kn,
{
struct kernfs_open_node *on, *new_on = NULL;

- retry:
mutex_lock(&kernfs_open_file_mutex);
- spin_lock_irq(&kernfs_open_node_lock);
-
- if (!kn->attr.open && new_on) {
- kn->attr.open = new_on;
- new_on = NULL;
- }
-
- on = kn->attr.open;
- if (on)
- list_add_tail(&of->list, &on->files);
-
- spin_unlock_irq(&kernfs_open_node_lock);
- mutex_unlock(&kernfs_open_file_mutex);
+ on = kernfs_deref_open_node_protected(kn);

if (on) {
- kfree(new_on);
+ list_add_tail(&of->list, &on->files);
+ mutex_unlock(&kernfs_open_file_mutex);
return 0;
+ } else {
+ /* not there, initialize a new one */
+ new_on = kmalloc(sizeof(*new_on), GFP_KERNEL);
+ if (!new_on) {
+ mutex_unlock(&kernfs_open_file_mutex);
+ return -ENOMEM;
+ }
+ atomic_set(&new_on->event, 1);
+ init_waitqueue_head(&new_on->poll);
+ INIT_LIST_HEAD(&new_on->files);
+ list_add_tail(&of->list, &new_on->files);
+ rcu_assign_pointer(kn->attr.open, new_on);
}
+ mutex_unlock(&kernfs_open_file_mutex);

- /* not there, initialize a new one and retry */
- new_on = kmalloc(sizeof(*new_on), GFP_KERNEL);
- if (!new_on)
- return -ENOMEM;
-
- atomic_set(&new_on->event, 1);
- init_waitqueue_head(&new_on->poll);
- INIT_LIST_HEAD(&new_on->files);
- goto retry;
+ return 0;
}

/**
@@ -567,24 +619,25 @@ static int kernfs_get_open_node(struct kernfs_node *kn,
static void kernfs_unlink_open_file(struct kernfs_node *kn,
struct kernfs_open_file *of)
{
- struct kernfs_open_node *on = kn->attr.open;
- unsigned long flags;
+ struct kernfs_open_node *on;

mutex_lock(&kernfs_open_file_mutex);
- spin_lock_irqsave(&kernfs_open_node_lock, flags);
+
+ on = kernfs_deref_open_node_protected(kn);
+ if (!on) {
+ mutex_unlock(&kernfs_open_file_mutex);
+ return;
+ }

if (of)
list_del(&of->list);

- if (list_empty(&on->files))
- kn->attr.open = NULL;
- else
- on = NULL;
+ if (list_empty(&on->files)) {
+ rcu_assign_pointer(kn->attr.open, NULL);
+ kfree_rcu(on, rcu_head);
+ }

- spin_unlock_irqrestore(&kernfs_open_node_lock, flags);
mutex_unlock(&kernfs_open_file_mutex);
-
- kfree(on);
}

static int kernfs_fop_open(struct inode *inode, struct file *file)
@@ -774,17 +827,16 @@ void kernfs_drain_open_files(struct kernfs_node *kn)
* check under kernfs_open_file_mutex will ensure bailing out if
* ->attr.open became NULL while waiting for the mutex.
*/
- if (!kn->attr.open)
+ if (!rcu_access_pointer(kn->attr.open))
return;

mutex_lock(&kernfs_open_file_mutex);
- if (!kn->attr.open) {
+ on = kernfs_deref_open_node_protected(kn);
+ if (!on) {
mutex_unlock(&kernfs_open_file_mutex);
return;
}

- on = kn->attr.open;
-
list_for_each_entry(of, &on->files, list) {
struct inode *inode = file_inode(of->file);

@@ -815,7 +867,10 @@ void kernfs_drain_open_files(struct kernfs_node *kn)
__poll_t kernfs_generic_poll(struct kernfs_open_file *of, poll_table *wait)
{
struct kernfs_node *kn = kernfs_dentry_node(of->file->f_path.dentry);
- struct kernfs_open_node *on = kn->attr.open;
+ struct kernfs_open_node *on = kernfs_deref_open_node(of, kn);
+
+ if (!on)
+ return EPOLLERR;

poll_wait(of->file, &on->poll, wait);

@@ -922,13 +977,13 @@ void kernfs_notify(struct kernfs_node *kn)
return;

/* kick poll immediately */
- spin_lock_irqsave(&kernfs_open_node_lock, flags);
- on = kn->attr.open;
+ rcu_read_lock();
+ on = rcu_dereference(kn->attr.open);
if (on) {
atomic_inc(&on->event);
wake_up_interruptible(&on->poll);
}
- spin_unlock_irqrestore(&kernfs_open_node_lock, flags);
+ rcu_read_unlock();

/* schedule work to kick fsnotify */
spin_lock_irqsave(&kernfs_notify_lock, flags);
diff --git a/include/linux/kernfs.h b/include/linux/kernfs.h
index e2ae15a6225e8..13f54f078a52a 100644
--- a/include/linux/kernfs.h
+++ b/include/linux/kernfs.h
@@ -114,7 +114,7 @@ struct kernfs_elem_symlink {

struct kernfs_elem_attr {
const struct kernfs_ops *ops;
- struct kernfs_open_node *open;
+ struct kernfs_open_node __rcu *open;
loff_t size;
struct kernfs_node *notify_next; /* for kernfs_notify() */
};
--
2.30.2

2022-06-14 16:26:04

by Tejun Heo

Subject: Re: [PATCH v6 1/4] kernfs: make ->attr.open RCU protected.

On Tue, Jun 14, 2022 at 05:03:43PM +1000, Imran Khan wrote:
> +/**
> + * kernfs_deref_open_node_protected - Get kernfs_open_node corresponding to @kn
> + *
> + * @kn: target kernfs_node.
> + *
> + * Fetch and return ->attr.open of @kn when caller holds the
> + * kernfs_open_file_mutex.
> + *
> + * Update of ->attr.open happens under kernfs_open_file_mutex. So when
> + * the caller guarantees that this mutex is being held, other updaters can't
> + * change ->attr.open and this means that we can safely deref ->attr.open
> + * outside RCU read-side critical section.
> + *
> + * The caller needs to make sure that kernfs_open_file_mutex is held.
> + */
> +static struct kernfs_open_node *
> +kernfs_deref_open_node_protected(struct kernfs_node *kn)
> +{
> + return rcu_dereference_check(kn->attr.open,
> + lockdep_is_held(&kernfs_open_file_mutex));

Hey, so, the difference between rcu_dereference_check() and
rcu_dereference_protected() is that the former can be called either with
rcu read locked or under the extra condition (here, open_file_mutex held),
while the latter can't be used under rcu read lock. The two can generate
different code too - the former enforces dependency ordering, which makes
accesses under rcu read lock safe, while the latter doesn't.

In the above, you're saying that the accessor is only to be used while
holding kernfs_open_file_mutex but then using rcu_dereference_check(),
which is odd. There are two ways you can go: 1. ensure that the accessor
is always used under the mutex and use rcu_dereference_protected(), or
2. if the function can be used under rcu read lock, rename it so that the
differentiation between the two accessors is based on the parameter type,
not on whether they're protected or not.
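
To put it concretely - a minimal sketch using the kernfs names from the
patch (assuming the single global kernfs_open_file_mutex):

	/* usable from rcu read-side *or* with the mutex held */
	rcu_read_lock();
	on = rcu_dereference_check(kn->attr.open,
				   lockdep_is_held(&kernfs_open_file_mutex));
	rcu_read_unlock();

	/* update side only - the mutex must be held, rcu read lock is not enough */
	mutex_lock(&kernfs_open_file_mutex);
	on = rcu_dereference_protected(kn->attr.open,
				       lockdep_is_held(&kernfs_open_file_mutex));
	mutex_unlock(&kernfs_open_file_mutex);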

Can you please post the updated patch as a reply to this one? No need to
post the whole thing over and over again.

Thanks.

--
tejun

2022-06-15 02:27:51

by Imran Khan

Subject: Re: [PATCH v6 1/4] kernfs: make ->attr.open RCU protected.

Hello Tejun,

On 15/6/22 2:20 am, Tejun Heo wrote:
> On Tue, Jun 14, 2022 at 05:03:43PM +1000, Imran Khan wrote:
>> +/**
[...]
>
> Hey, so, the difference between rcu_dereference_check() and
> rcu_dereference_protected() is that the former can be called either with
> rcu read locked or under the extra condition (here, open_file_mutex held),
> while the latter can't be used under rcu read lock. The two can generate
> different code too - the former enforces dependency ordering, which makes
> accesses under rcu read lock safe, while the latter doesn't.
>
> In the above, you're saying that the accessor is only to be used while
> holding kernfs_open_file_mutex but then using rcu_dereference_check(),
> which is odd. There are two ways you can go: 1. ensure that the accessor
> is always used under the mutex and use rcu_dereference_protected(), or
> 2. if the function can be used under rcu read lock, rename it so that the
> differentiation between the two accessors is based on the parameter type,
> not on whether they're protected or not.
>
I am going with option 1 suggested above, since the accessor will always operate
under kernfs_open_file_mutex.

> Can you please post the updated patch as a reply to this one? No need to
> post the whole thing over and over again.
>
I am sending the full patch set because, after modifying PATCH-1, we will
get a conflict like the one below when applying the previous version of
PATCH-3:

static struct kernfs_open_node *
kernfs_deref_open_node_protected(struct kernfs_node *kn)
{
<<<<<<< HEAD
return rcu_dereference_protected(kn->attr.open,
lockdep_is_held(&kernfs_open_file_mutex));
=======
return rcu_dereference_check(kn->attr.open,
lockdep_is_held(kernfs_open_file_mutex_ptr(kn)));
>>>>>>> 80411dbfe1890 (kernfs: Introduce interface to access global kernfs_open_file_mutex.)

Sending the full patch set will avoid this small conflict. I hope sending
the full patch set is okay this time. The full patch set (v7) is available
at [1].

Thanks
-- Imran

[1]: https://lore.kernel.org/lkml/[email protected]/

2022-06-15 06:25:07

by Tejun Heo

Subject: Re: [PATCH v6 1/4] kernfs: make ->attr.open RCU protected.

On Wed, Jun 15, 2022 at 12:13:48PM +1000, Imran Khan wrote:
> Sending the full patch set will avoid this small conflict. I hope sending
> the full patch set is okay this time. The full patch set (v7) is available
> at [1].

It's just easier to focus on a single patch when repeatedly iterating on it
like this. It's easy to repost the whole thing when the patch is settled. It
becomes a bit tiring to keep reposting essentially the same patchset over
and over again.

Thanks.

--
tejun