Hi,
I have just learnt about KOBJ_{ADD,REMOVE} sysfs events triggered on
kmem cache creation/removal when SLUB is configured. This functionality
goes all the way back to the initial SLUB merge. I do not see any references
in the Documentation explaining what those events are used for and
whether there are any real users.
Could you shed some more light on this?
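For illustration, here is a minimal sketch (an assumed example listener,
not a known real user) of how userspace could watch these events on the
NETLINK_KOBJECT_UEVENT socket that udevd also listens on:

/* slab_uevent_listen.c - assumed example, not from this thread.
 * Prints the "add@/kernel/slab/<cache>" / "remove@/kernel/slab/<cache>"
 * messages as udevd would receive them. May require root privileges
 * to bind the socket. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <linux/netlink.h>

int main(void)
{
	struct sockaddr_nl addr = {
		.nl_family = AF_NETLINK,
		.nl_pid = getpid(),
		.nl_groups = 1,		/* kernel uevent multicast group */
	};
	char buf[4096];
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_KOBJECT_UEVENT);

	if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("netlink");
		return 1;
	}
	for (;;) {
		ssize_t len = recv(fd, buf, sizeof(buf) - 1, 0);

		if (len <= 0)
			break;
		buf[len] = '\0';
		/* The first NUL-terminated segment is "ACTION@DEVPATH",
		 * e.g. "add@/kernel/slab/kmalloc-64". */
		if (strstr(buf, "/kernel/slab/"))
			printf("%s\n", buf);
	}
	close(fd);
	return 0;
}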
--
Michal Hocko
SUSE Labs
On Tue, 26 Nov 2019, Michal Hocko wrote:
> Hi,
> I have just learnt about KOBJ_{ADD,REMOVE} sysfs events triggered on
> kmem cache creation/removal when SLUB is configured. This functionality
> goes all the way back to the initial SLUB merge. I do not see any references
> in the Documentation explaining what those events are used for and
> whether there are any real users.
>
> Could you shed some more light on this?
I have no idea what this is. There have been many people who have
reworked the sysfs support, and this has been the cause of a lot of
breakage over the years.
On Tue 26-11-19 16:32:56, Christopher Lameter wrote:
> On Tue, 26 Nov 2019, Michal Hocko wrote:
>
> > Hi,
> > I have just learnt about KOBJ_{ADD,REMOVE} sysfs events triggered on
> > kmem cache creation/removal when SLUB is configured. This functionality
> > goes all the way back to the initial SLUB merge. I do not see any references
> > in the Documentation explaining what those events are used for and
> > whether there are any real users.
> >
> > Could you shed some more light on this?
>
> I have no idea what this is.
It seems to have been there since the initial merge. I suspect this is
just following a generic sysfs rule that each kobject has to provide
those events?
> There have been many people who have
> reworked the sysfs support, and this has been the cause of a lot of
> breakage over the years.
Remember any specifics?
I am mostly interested in potential users. In other words, I am thinking
of suppressing those events. There is already a knob to control the
existence of memcg caches but I do not see anything like this for root
caches.
--
Michal Hocko
SUSE Labs
On Tue, 26 Nov 2019, Michal Hocko wrote:
> > I have no idea what this is.
>
> It seems to have been there since the initial merge. I suspect this is
> just following a generic sysfs rule that each kobject has to provide
> those events?
I have never heard of anyone using this.
> > There have been many people who have
> > reworked the sysfs support, and this has been the cause of a lot of
> > breakage over the years.
>
> Remember any specifics?
The sequencing of setup / teardown of sysfs entries has frequently been
a problem and that caused numerous issues with slab initialization as well
as kmem cache creation. Initially kmalloc DMA caches were created on
demand which caused some issues. Then there was the back and forth with
cache aliasing during kmem_cache_create() that caused another set of
instabilities.
> I am mostly interested in potential users. In other words, I am thinking
> of suppressing those events. There is already a knob to control the
> existence of memcg caches but I do not see anything like this for root
> caches.
>
I am not aware of any users but the deployments of Linux are so diverse
these days that I am not sure that there are no users.
On Wed 27-11-19 15:40:19, Christopher Lameter wrote:
> On Tue, 26 Nov 2019, Michal Hocko wrote:
>
> > > I have no idea what this is.
> >
> > It seems to have been there since the initial merge. I suspect this is
> > just following a generic sysfs rule that each kobject has to provide
> > those events?
>
> I have never heard of anyone using this.
>
> > > There have been many people who have
> > > reworked the sysfs support, and this has been the cause of a lot of
> > > breakage over the years.
> >
> > Remember any specifics?
>
> The sequencing of setup / teardown of sysfs entries has frequently been
> a problem and that caused numerous issues with slab initialization as well
> as kmem cache creation. Initially kmalloc DMA caches were created on
> demand which caused some issues. Then there was the back and forth with
> cache aliasing during kmem_cache_create() that caused another set of
> instabilities.
>
> > I am mostly interested in potential users. In other words, I am thinking
> > of suppressing those events. There is already a knob to control the
> > existence of memcg caches but I do not see anything like this for root
> > caches.
> >
>
> I am not aware of any users but the deployments of Linux are so diverse
> these days that I am not sure that there are no users.
Would you mind a patch that adds a kernel command line parameter working
like memcg_sysfs_enabled? The default for the config would be on. Or
would it be preferable to simply drop only the events?
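For reference, a minimal sketch of what such a knob could look like,
modeled after the existing slub_memcg_sysfs= handling in mm/slub.c; the
slub_uevents= name and the guarded call sites below are assumptions, not
an actual patch:

/* Hypothetical boot parameter; "slub_uevents" is an assumed name and
 * this was never merged. Follows the slub_memcg_sysfs= pattern. */
static bool slub_uevents_enabled = true;

static int __init setup_slub_uevents(char *str)
{
	int v;

	if (get_option(&str, &v) > 0)
		slub_uevents_enabled = v;

	return 1;
}
__setup("slub_uevents=", setup_slub_uevents);

/* ...and then in sysfs_slab_add() / sysfs_slab_remove_workfn(): */
	if (slub_uevents_enabled)
		kobject_uevent(&s->kobj, KOBJ_ADD);	/* resp. KOBJ_REMOVE */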
--
Michal Hocko
SUSE Labs
On Wed, 27 Nov 2019, Michal Hocko wrote:
> Would you mind a patch that adds a kernel command line parameter working
> like memcg_sysfs_enabled? The default for the config would be on. Or
> would it be preferable to simply drop only the events?
Just dropping the events may be best. Then we will find out if someone is using them.
On Wed 27-11-19 16:26:11, Christopher Lameter wrote:
> On Wed, 27 Nov 2019, Michal Hocko wrote:
>
> > Would you mind a patch that adds a kernel command line parameter working
> > like memcg_sysfs_enabled? The default for the config would be on. Or
> > would it be preferable to simply drop only the events?
>
> Just dropping the events may be best. Then we will find out if someone is using them.
I would be worried that a lack of events might be surprising and a
potential userspace wouldn't know that something has changed.
--
Michal Hocko
SUSE Labs
On Wed 27-11-19 18:43:17, Michal Hocko wrote:
> On Wed 27-11-19 16:26:11, Christopher Lameter wrote:
> > On Wed, 27 Nov 2019, Michal Hocko wrote:
> >
> > > Would you mind a patch that adds a kernel command line parameter working
> > > like memcg_sysfs_enabled? The default for the config would be on. Or
> > > would it be preferable to simply drop only the events?
> >
> > Just dropping the events may be best. Then we will find out if someone is using them.
>
> I would be worried that a lack of events might be surprising and a
> potential userspace wouldn't know that something has changed.
It would be great to land on some decision here.
--
Michal Hocko
SUSE Labs
On Wed 04-12-19 15:25:43, Christopher Lameter wrote:
> On Wed, 4 Dec 2019, Michal Hocko wrote:
>
> > > > Just dropping the events may be best. Then we will find out if someone is using them.
> > >
> > > I would be worried that a lack of events might be surprising and a
> > > potential userspace wouldn't know that something has changed.
> >
> > It would be great to land on some decision here.
>
> Drop the events. These are internal kernel structures and supporting
> notifiers to userspace is a bit unusual.
As I've said I believe it is quite risky. But if you as a maintainer
believe this is the right thing to do I will not object. Care to send a
patch?
--
Michal Hocko
SUSE Labs
On Wed, 4 Dec 2019, Michal Hocko wrote:
> > > Just dropping the events may be best. Then we will find out if someone is using them.
> >
> > I would be worried that a lack of events might be surprising and a
> > potential userspace wouldn't know that something has changed.
>
> It would be great to land on some decision here.
Drop the events. These are internal kernel structures and supporting
notifiers to userspace is a bit unusual.
On Wed, 4 Dec 2019, Michal Hocko wrote:
> As I've said I believe it is quite risky. But if you as a maintainer
> believe this is the right thing to do I will not object. Care to send a
> patch?
From: Christoph Lameter <[email protected]>
Subject: slub: Remove userspace notifier for cache add/remove
Kmem caches are internal kernel structures so it is strange that
userspace notifiers would be needed. And I am not aware of any use
of these notifiers. These notifiers may just exist because in the
initial slub release the sysfs code was copied from another
subsystem.
Signed-off-by: Christoph Lameter <[email protected]>
Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c 2019-12-02 15:13:23.948312925 +0000
+++ linux/mm/slub.c 2019-12-04 16:32:34.648550310 +0000
@@ -5632,19 +5632,6 @@ static struct kobj_type slab_ktype = {
.release = kmem_cache_release,
};
-static int uevent_filter(struct kset *kset, struct kobject *kobj)
-{
- struct kobj_type *ktype = get_ktype(kobj);
-
- if (ktype == &slab_ktype)
- return 1;
- return 0;
-}
-
-static const struct kset_uevent_ops slab_uevent_ops = {
- .filter = uevent_filter,
-};
-
static struct kset *slab_kset;
static inline struct kset *cache_kset(struct kmem_cache *s)
@@ -5712,7 +5699,6 @@ static void sysfs_slab_remove_workfn(str
#ifdef CONFIG_MEMCG
kset_unregister(s->memcg_kset);
#endif
- kobject_uevent(&s->kobj, KOBJ_REMOVE);
out:
kobject_put(&s->kobj);
}
@@ -5770,7 +5756,6 @@ static int sysfs_slab_add(struct kmem_ca
}
#endif
- kobject_uevent(&s->kobj, KOBJ_ADD);
if (!unmergeable) {
/* Setup first alias */
sysfs_slab_alias(s, s->name);
@@ -5851,7 +5836,7 @@ static int __init slab_sysfs_init(void)
mutex_lock(&slab_mutex);
- slab_kset = kset_create_and_add("slab", &slab_uevent_ops, kernel_kobj);
+ slab_kset = kset_create_and_add("slab", NULL, kernel_kobj);
if (!slab_kset) {
mutex_unlock(&slab_mutex);
pr_err("Cannot register slab subsystem.\n");
[Cc akpm - the email thread starts
http://lkml.kernel.org/r/[email protected]]
On Wed 04-12-19 16:53:47, Christopher Lameter wrote:
> On Wed, 4 Dec 2019, Michal Hocko wrote:
>
> > As I've said I believe it is quite risky. But if you as a maintainer
> > believe this is the right thing to do I will not object. Care to send a
> > patch?
>
> From: Christoph Lameter <[email protected]>
> Subject: slub: Remove userspace notifier for cache add/remove
>
> Kmem caches are internal kernel structures so it is strange that
> userspace notifiers would be needed. And I am not aware of any use
> of these notifiers. These notifiers may just exist because in the
> initial slub release the sysfs code was copied from another
> subsystem.
>
> Signed-off-by: Christoph Lameter <[email protected]>
>
> Index: linux/mm/slub.c
> ===================================================================
> --- linux.orig/mm/slub.c 2019-12-02 15:13:23.948312925 +0000
> +++ linux/mm/slub.c 2019-12-04 16:32:34.648550310 +0000
> @@ -5632,19 +5632,6 @@ static struct kobj_type slab_ktype = {
> .release = kmem_cache_release,
> };
>
> -static int uevent_filter(struct kset *kset, struct kobject *kobj)
> -{
> - struct kobj_type *ktype = get_ktype(kobj);
> -
> - if (ktype == &slab_ktype)
> - return 1;
> - return 0;
> -}
> -
> -static const struct kset_uevent_ops slab_uevent_ops = {
> - .filter = uevent_filter,
> -};
> -
> static struct kset *slab_kset;
>
> static inline struct kset *cache_kset(struct kmem_cache *s)
> @@ -5712,7 +5699,6 @@ static void sysfs_slab_remove_workfn(str
> #ifdef CONFIG_MEMCG
> kset_unregister(s->memcg_kset);
> #endif
> - kobject_uevent(&s->kobj, KOBJ_REMOVE);
> out:
> kobject_put(&s->kobj);
> }
> @@ -5770,7 +5756,6 @@ static int sysfs_slab_add(struct kmem_ca
> }
> #endif
>
> - kobject_uevent(&s->kobj, KOBJ_ADD);
> if (!unmergeable) {
> /* Setup first alias */
> sysfs_slab_alias(s, s->name);
> @@ -5851,7 +5836,7 @@ static int __init slab_sysfs_init(void)
>
> mutex_lock(&slab_mutex);
>
> - slab_kset = kset_create_and_add("slab", &slab_uevent_ops, kernel_kobj);
> + slab_kset = kset_create_and_add("slab", NULL, kernel_kobj);
> if (!slab_kset) {
> mutex_unlock(&slab_mutex);
> pr_err("Cannot register slab subsystem.\n");
--
Michal Hocko
SUSE Labs
On Wed 04-12-19 18:32:24, Michal Hocko wrote:
> [Cc akpm - the email thread starts
> http://lkml.kernel.org/r/[email protected]]
ping.
> On Wed 04-12-19 16:53:47, Christopher Lameter wrote:
> > On Wed, 4 Dec 2019, Michal Hocko wrote:
> >
> > > As I've said I believe it is quite risky. But if you as a maintainer
> > > believe this is the right thing to do I will not object. Care to send a
> > > patch?
> >
> > From: Christoph Lameter <[email protected]>
> > Subject: slub: Remove userspace notifier for cache add/remove
> >
> > Kmem caches are internal kernel structures so it is strange that
> > userspace notifiers would be needed. And I am not aware of any use
> > of these notifiers. These notifiers may just exist because in the
> > initial slub release the sysfs code was copied from another
> > subsystem.
> >
> > Signed-off-by: Christoph Lameter <[email protected]>
> >
> > Index: linux/mm/slub.c
> > ===================================================================
> > --- linux.orig/mm/slub.c 2019-12-02 15:13:23.948312925 +0000
> > +++ linux/mm/slub.c 2019-12-04 16:32:34.648550310 +0000
> > @@ -5632,19 +5632,6 @@ static struct kobj_type slab_ktype = {
> > .release = kmem_cache_release,
> > };
> >
> > -static int uevent_filter(struct kset *kset, struct kobject *kobj)
> > -{
> > - struct kobj_type *ktype = get_ktype(kobj);
> > -
> > - if (ktype == &slab_ktype)
> > - return 1;
> > - return 0;
> > -}
> > -
> > -static const struct kset_uevent_ops slab_uevent_ops = {
> > - .filter = uevent_filter,
> > -};
> > -
> > static struct kset *slab_kset;
> >
> > static inline struct kset *cache_kset(struct kmem_cache *s)
> > @@ -5712,7 +5699,6 @@ static void sysfs_slab_remove_workfn(str
> > #ifdef CONFIG_MEMCG
> > kset_unregister(s->memcg_kset);
> > #endif
> > - kobject_uevent(&s->kobj, KOBJ_REMOVE);
> > out:
> > kobject_put(&s->kobj);
> > }
> > @@ -5770,7 +5756,6 @@ static int sysfs_slab_add(struct kmem_ca
> > }
> > #endif
> >
> > - kobject_uevent(&s->kobj, KOBJ_ADD);
> > if (!unmergeable) {
> > /* Setup first alias */
> > sysfs_slab_alias(s, s->name);
> > @@ -5851,7 +5836,7 @@ static int __init slab_sysfs_init(void)
> >
> > mutex_lock(&slab_mutex);
> >
> > - slab_kset = kset_create_and_add("slab", &slab_uevent_ops, kernel_kobj);
> > + slab_kset = kset_create_and_add("slab", NULL, kernel_kobj);
> > if (!slab_kset) {
> > mutex_unlock(&slab_mutex);
> > pr_err("Cannot register slab subsystem.\n");
>
> --
> Michal Hocko
> SUSE Labs
--
Michal Hocko
SUSE Labs
On Mon, 6 Jan 2020, Michal Hocko wrote:
> On Wed 04-12-19 18:32:24, Michal Hocko wrote:
> > [Cc akpm - the email thread starts
> > http://lkml.kernel.org/r/[email protected]]
>
> ping.
There does not seem to be much interest in the patch?
On 12/4/19 5:53 PM, Christopher Lameter wrote:
> On Wed, 4 Dec 2019, Michal Hocko wrote:
>
>> As I've said I believe it is quite risky. But if you as a maintainer
>> believe this is the right thing to do I will not object. Care to send a
>> patch?
>
> From: Christoph Lameter <[email protected]>
> Subject: slub: Remove userspace notifier for cache add/remove
>
> Kmem caches are internal kernel structures so it is strange that
> userspace notifiers would be needed. And I am not aware of any use
> of these notifiers. These notifiers may just exist because in the
> initial slub release the sysfs code was copied from another
> subsystem.
>
> Signed-off-by: Christoph Lameter <[email protected]>
If anyone cares, we'll find out eventually.
Acked-by: Vlastimil Babka <[email protected]>
>
> Index: linux/mm/slub.c
> ===================================================================
> --- linux.orig/mm/slub.c 2019-12-02 15:13:23.948312925 +0000
> +++ linux/mm/slub.c 2019-12-04 16:32:34.648550310 +0000
> @@ -5632,19 +5632,6 @@ static struct kobj_type slab_ktype = {
> .release = kmem_cache_release,
> };
>
> -static int uevent_filter(struct kset *kset, struct kobject *kobj)
> -{
> - struct kobj_type *ktype = get_ktype(kobj);
> -
> - if (ktype == &slab_ktype)
> - return 1;
> - return 0;
> -}
> -
> -static const struct kset_uevent_ops slab_uevent_ops = {
> - .filter = uevent_filter,
> -};
> -
> static struct kset *slab_kset;
>
> static inline struct kset *cache_kset(struct kmem_cache *s)
> @@ -5712,7 +5699,6 @@ static void sysfs_slab_remove_workfn(str
> #ifdef CONFIG_MEMCG
> kset_unregister(s->memcg_kset);
> #endif
> - kobject_uevent(&s->kobj, KOBJ_REMOVE);
> out:
> kobject_put(&s->kobj);
> }
> @@ -5770,7 +5756,6 @@ static int sysfs_slab_add(struct kmem_ca
> }
> #endif
>
> - kobject_uevent(&s->kobj, KOBJ_ADD);
> if (!unmergeable) {
> /* Setup first alias */
> sysfs_slab_alias(s, s->name);
> @@ -5851,7 +5836,7 @@ static int __init slab_sysfs_init(void)
>
> mutex_lock(&slab_mutex);
>
> - slab_kset = kset_create_and_add("slab", &slab_uevent_ops, kernel_kobj);
> + slab_kset = kset_create_and_add("slab", NULL, kernel_kobj);
> if (!slab_kset) {
> mutex_unlock(&slab_mutex);
> pr_err("Cannot register slab subsystem.\n");
>
On Mon 06-01-20 15:51:26, Christopher Lameter wrote:
> On Mon, 6 Jan 2020, Michal Hocko wrote:
>
> > On Wed 04-12-19 18:32:24, Michal Hocko wrote:
> > > [Cc akpm - the email thread starts
> > > http://lkml.kernel.org/r/[email protected]]
> >
> > ping.
>
> There does not seem to be much interest in the patch?
It seems it has just fallen through the cracks.
--
Michal Hocko
SUSE Labs
On Thu, 9 Jan 2020 15:52:36 +0100 Michal Hocko <[email protected]> wrote:
> On Mon 06-01-20 15:51:26, Christopher Lameter wrote:
> > On Mon, 6 Jan 2020, Michal Hocko wrote:
> >
> > > On Wed 04-12-19 18:32:24, Michal Hocko wrote:
> > > > [Cc akpm - the email thread starts
> > > > http://lkml.kernel.org/r/[email protected]]
> > >
> > > ping.
> >
> > There does not seem to be much interest in the patch?
>
> It seems it has just fallen through the cracks.
I looked at it - there wasn't really any compelling followup.
If this change should be pursued then can we please have a formal
resend?
On Thu 09-01-20 11:44:15, Andrew Morton wrote:
> On Thu, 9 Jan 2020 15:52:36 +0100 Michal Hocko <[email protected]> wrote:
>
> > On Mon 06-01-20 15:51:26, Christopher Lameter wrote:
> > > On Mon, 6 Jan 2020, Michal Hocko wrote:
> > >
> > > > On Wed 04-12-19 18:32:24, Michal Hocko wrote:
> > > > > [Cc akpm - the email thread starts
> > > > > http://lkml.kernel.org/r/[email protected]]
> > > >
> > > > ping.
> > >
> > > There does not seem to be much interest in the patch?
> >
> > It seems it has just fallen through the cracks.
>
> I looked at it - there wasn't really any compelling followup.
Btw. I would appreciate it if this were explicit, because if there is no
reaction then it is not clear whether the patch has slipped through the
cracks or is not worth pursuing for whatever reason.
Thanks!
--
Michal Hocko
SUSE Labs
On Thu 09-01-20 11:44:15, Andrew Morton wrote:
> On Thu, 9 Jan 2020 15:52:36 +0100 Michal Hocko <[email protected]> wrote:
>
> > On Mon 06-01-20 15:51:26, Christopher Lameter wrote:
> > > On Mon, 6 Jan 2020, Michal Hocko wrote:
> > >
> > > > On Wed 04-12-19 18:32:24, Michal Hocko wrote:
> > > > > [Cc akpm - the email thread starts
> > > > > http://lkml.kernel.org/r/[email protected]]
> > > >
> > > > ping.
> > >
> > > There does not seem to be much interest in the patch?
> >
> > It seems it has just fallen through the cracks.
>
> I looked at it - there wasn't really any compelling followup.
The primary motivation is the pointless udev event generated for each
created cache. There are not that many in the global case, but with
memcg a copy of each cache is created per cgroup, so the events quickly
add up.
--
Michal Hocko
SUSE Labs
Hello.
On Thu, Jan 09, 2020 at 11:44:15AM -0800, Andrew Morton <[email protected]> wrote:
> I looked at it - there wasn't really any compelling followup.
FTR, I noticed udevd consuming non-negligible CPU cycles when doing some
cgroup stress testing. And even extrapolating to less artificial
situations, the udev events seem to cause useless tickling of udevd.
I used the simple script below
cat >measure.sh <<'EOD'
sample() {
local n=$(echo|awk "END {print int(40/$1)}")
for i in $(seq $n) ; do
mkdir /sys/fs/cgroup/memory/grp1 ;
echo 0 >/sys/fs/cgroup/memory/grp1/cgroup.procs ;
/usr/bin/sleep $1 ;
echo 0 >/sys/fs/cgroup/memory/cgroup.procs ;
rmdir /sys/fs/cgroup/memory/grp1 ;
done
}
for d in 0.004 0.008 0.016 0.032 0.064 0.128 0.256 0.5 1 ; do
echo 0 >/sys/fs/cgroup/cpuacct/system.slice/systemd-udevd.service/cpuacct.usage
time sample $d 2>&1 | grep real
echo -n "udev "
cat /sys/fs/cgroup/cpuacct/system.slice/systemd-udevd.service/cpuacct.usage
done
EOD
and I drew the following ballpark conclusion:
1.7% CPU time at 1 event/s -> ~100% CPU at 60 events/s (linear extrapolation)
(The event is one mkdir/migrate/rmdir sequence. Numbers are from a dummy
test VM, so take with a grain of salt.)
> If this change should be pursued then can we please have a formal
> resend?
Who's supposed to do that?
Regards,
Michal
On Fri, 17 Jan 2020 18:13:31 +0100 Michal Koutný <[email protected]> wrote:
> Hello.
>
> On Thu, Jan 09, 2020 at 11:44:15AM -0800, Andrew Morton <[email protected]> wrote:
> > I looked at it - there wasn't really any compelling followup.
> FTR, I noticed udevd consuming non-negligible CPU cycles when doing some
> cgroup stress testing. And even extrapolating to less artificial
> situations, the udev events seem to cause useless tickling of udevd.
>
> I used the simple script below
> cat >measure.sh <<'EOD'
> sample() {
> local n=$(echo|awk "END {print int(40/$1)}")
>
> for i in $(seq $n) ; do
> mkdir /sys/fs/cgroup/memory/grp1 ;
> echo 0 >/sys/fs/cgroup/memory/grp1/cgroup.procs ;
> /usr/bin/sleep $1 ;
> echo 0 >/sys/fs/cgroup/memory/cgroup.procs ;
> rmdir /sys/fs/cgroup/memory/grp1 ;
> done
> }
>
> for d in 0.004 0.008 0.016 0.032 0.064 0.128 0.256 0.5 1 ; do
> echo 0 >/sys/fs/cgroup/cpuacct/system.slice/systemd-udevd.service/cpuacct.usage
> time sample $d 2>&1 | grep real
> echo -n "udev "
> cat /sys/fs/cgroup/cpuacct/system.slice/systemd-udevd.service/cpuacct.usage
> done
> EOD
>
> and I drew the following ballpark conclusion:
> 1.7% CPU time at 1 event/s -> ~100% CPU at 60 events/s (linear extrapolation)
>
> (The event is one mkdir/migrate/rmdir sequence. Numbers are from a dummy
> test VM, so take with a grain of salt.)
Thanks. What effect does this patch have upon these numbers?
>
> > If this change should be pursued then can we please have a formal
> > resend?
> Who's supposed to do that?
Typically the author, but not always. If someone else is particularly
motivated to get a patch merged up they can take it over.
On Sat, Jan 18, 2020 at 04:15:28PM -0800, Andrew Morton <[email protected]> wrote:
> > situations, the udev events seem to cause useless tickling of udevd.
> > [...]
> > and I drew the following ballpark conclusion:
> > 1.7% CPU time at 1 event/s -> ~100% CPU at 60 events/s (linear extrapolation)
> Thanks. What effect does this patch have upon these numbers?
When I reran the script with the patched kernel, udevd sat mostly idle
(there were no other udev event sources). So the number can be said to
drop to 0% CPU time per event/s.
> Typically the author, but not always. If someone else is particularly
> motivated to get a patch merged up they can take it over.
Christopher, would you consider resending your patch? (I second that it
exposes the internal details (wrt cgroup caches), and I can observe that
udevd wastes CPU time just reading the events.)
Thanks,
Michal
On Mon, 27 Jan 2020, Michal Koutný wrote:
> When I reran the script with the patched kernel, udevd sat mostly idle
> (there were no other udev event sources). So the number can be said to
> drop to 0% CPU time per event/s.
>
> > Typically the author, but not always. If someone else is particularly
> > motivated to get a patch merged up they can take it over.
> Christopher, would you consider resending your patch? (I second that it
> exposes the internal details (wrt cgroup caches), and I can observe that
> udevd wastes CPU time just reading the events.)
The patch exposes details of cgroup caches? Which patch are we talking
about?
On Mon, Jan 27, 2020 at 11:04:53PM +0000, Christopher Lameter <[email protected]> wrote:
> The patch exposes details of cgroup caches? Which patch are we talking
> about?
Sorry, that's a misunderstanding. I meant that the current state (sending
uevents) exposes the internals (creation of caches per cgroup). The
patch [1] removing the uevent notifications rectifies that.
Michal
[1] https://lore.kernel.org/lkml/[email protected]/
On Tue, 28 Jan 2020, Michal Koutný wrote:
> On Mon, Jan 27, 2020 at 11:04:53PM +0000, Christopher Lameter <[email protected]> wrote:
> > The patch exposes details of cgroup caches? Which patch are we talking
> > about?
> Sorry, that's a misunderstanding. I meant that the current state (sending
> uevents) exposes the internals (creation of caches per cgroup). The
> patch [1] removing the uevent notifications rectifies that.
From: Christoph Lameter <[email protected]>
Subject: slub: Remove userspace notifier for cache add/remove
Kmem caches are internal kernel structures so it is strange that
userspace notifiers would be needed. And I am not aware of any use
of these notifiers. These notifiers may just exist because in the
initial slub release the sysfs code was copied from another
subsystem.
Signed-off-by: Christoph Lameter <[email protected]>
Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c 2020-01-28 18:13:02.134506141 +0000
+++ linux/mm/slub.c 2020-01-28 18:13:02.134506141 +0000
@@ -5632,19 +5632,6 @@ static struct kobj_type slab_ktype = {
.release = kmem_cache_release,
};
-static int uevent_filter(struct kset *kset, struct kobject *kobj)
-{
- struct kobj_type *ktype = get_ktype(kobj);
-
- if (ktype == &slab_ktype)
- return 1;
- return 0;
-}
-
-static const struct kset_uevent_ops slab_uevent_ops = {
- .filter = uevent_filter,
-};
-
static struct kset *slab_kset;
static inline struct kset *cache_kset(struct kmem_cache *s)
@@ -5712,7 +5699,6 @@ static void sysfs_slab_remove_workfn(str
#ifdef CONFIG_MEMCG
kset_unregister(s->memcg_kset);
#endif
- kobject_uevent(&s->kobj, KOBJ_REMOVE);
out:
kobject_put(&s->kobj);
}
@@ -5770,7 +5756,6 @@ static int sysfs_slab_add(struct kmem_ca
}
#endif
- kobject_uevent(&s->kobj, KOBJ_ADD);
if (!unmergeable) {
/* Setup first alias */
sysfs_slab_alias(s, s->name);
@@ -5851,7 +5836,7 @@ static int __init slab_sysfs_init(void)
mutex_lock(&slab_mutex);
- slab_kset = kset_create_and_add("slab", &slab_uevent_ops, kernel_kobj);
+ slab_kset = kset_create_and_add("slab", NULL, kernel_kobj);
if (!slab_kset) {
mutex_unlock(&slab_mutex);
pr_err("Cannot register slab subsystem.\n");
On 1/28/20 7:13 PM, Christopher Lameter wrote:
> On Tue, 28 Jan 2020, Michal Koutný wrote:
>
>> On Mon, Jan 27, 2020 at 11:04:53PM +0000, Christopher Lameter <[email protected]> wrote:
>>> The patch exposes details of cgroup caches? Which patch are we talking
>>> about?
>> Sorry, that's a misunderstanding. I meant that the current state (sending
>> uevents) exposes the internals (creation of caches per cgroup). The
>> patch [1] removing the uevent notifications rectifies that.
>
>
> From: Christoph Lameter <[email protected]>
> Subject: slub: Remove userspace notifier for cache add/remove
>
> Kmem caches are internal kernel structures so it is strange that
> userspace notifiers would be needed. And I am not aware of any use
> of these notifiers. These notifiers may just exist because in the
> initial slub release the sysfs code was copied from another
> subsystem.
>
> Signed-off-by: Christoph Lameter <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
> Index: linux/mm/slub.c
> ===================================================================
> --- linux.orig/mm/slub.c 2020-01-28 18:13:02.134506141 +0000
> +++ linux/mm/slub.c 2020-01-28 18:13:02.134506141 +0000
> @@ -5632,19 +5632,6 @@ static struct kobj_type slab_ktype = {
> .release = kmem_cache_release,
> };
>
> -static int uevent_filter(struct kset *kset, struct kobject *kobj)
> -{
> - struct kobj_type *ktype = get_ktype(kobj);
> -
> - if (ktype == &slab_ktype)
> - return 1;
> - return 0;
> -}
> -
> -static const struct kset_uevent_ops slab_uevent_ops = {
> - .filter = uevent_filter,
> -};
> -
> static struct kset *slab_kset;
>
> static inline struct kset *cache_kset(struct kmem_cache *s)
> @@ -5712,7 +5699,6 @@ static void sysfs_slab_remove_workfn(str
> #ifdef CONFIG_MEMCG
> kset_unregister(s->memcg_kset);
> #endif
> - kobject_uevent(&s->kobj, KOBJ_REMOVE);
> out:
> kobject_put(&s->kobj);
> }
> @@ -5770,7 +5756,6 @@ static int sysfs_slab_add(struct kmem_ca
> }
> #endif
>
> - kobject_uevent(&s->kobj, KOBJ_ADD);
> if (!unmergeable) {
> /* Setup first alias */
> sysfs_slab_alias(s, s->name);
> @@ -5851,7 +5836,7 @@ static int __init slab_sysfs_init(void)
>
> mutex_lock(&slab_mutex);
>
> - slab_kset = kset_create_and_add("slab", &slab_uevent_ops, kernel_kobj);
> + slab_kset = kset_create_and_add("slab", NULL, kernel_kobj);
> if (!slab_kset) {
> mutex_unlock(&slab_mutex);
> pr_err("Cannot register slab subsystem.\n");
>