2020-02-12 19:37:39

by Daniel Walker (danielwa)

Subject: [PATCH] drivers: connector: cn_proc: allow limiting certain messages

This adds a way for system administrators to limit which messages can be
seen on the interface. Currently cn_proc is rather noisy, and it sends many
different message types which may not be needed.

At Cisco we need to receive the coredump messages and no others, but this
interface currently has no way to allow that. This patch provides a set of
bool module parameters to enable or disable each message type. The
parameters end up looking like this:

$ ls -al /sys/module/cn_proc/parameters/
total 0
drwxr-xr-x 2 root root 0 Feb 10 16:51 .
drwxr-xr-x 3 root root 0 Feb 10 16:50 ..
-rw-r--r-- 1 root root 4096 Feb 10 16:51 enabled_comm
-rw-r--r-- 1 root root 4096 Feb 10 16:51 enabled_coredump
-rw-r--r-- 1 root root 4096 Feb 10 16:51 enabled_exec
-rw-r--r-- 1 root root 4096 Feb 10 16:51 enabled_exit
-rw-r--r-- 1 root root 4096 Feb 10 16:51 enabled_fork
-rw-r--r-- 1 root root 4096 Feb 10 16:51 enabled_gid
-rw-r--r-- 1 root root 4096 Feb 10 16:51 enabled_none
-rw-r--r-- 1 root root 4096 Feb 10 16:51 enabled_ptrace
-rw-r--r-- 1 root root 4096 Feb 10 16:51 enabled_sid
-rw-r--r-- 1 root root 4096 Feb 10 16:51 enabled_uid

All messages are enabled by default. To disable one, you can run the following:

echo N > /sys/module/cn_proc/parameters/enabled_comm
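
Since these are ordinary module parameters, they can also be set at boot.
For example, assuming cn_proc is built in under the module name shown
above, the comm messages could be disabled from the kernel command line:

cn_proc.enabled_comm=N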

Signed-off-by: Daniel Walker <[email protected]>
---
drivers/connector/cn_proc.c | 34 ++++++++++++++++++++++++++++++++++
1 file changed, 34 insertions(+)

diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
index d58ce664da84..23b934ee9862 100644
--- a/drivers/connector/cn_proc.c
+++ b/drivers/connector/cn_proc.c
@@ -38,6 +38,22 @@ static inline struct cn_msg *buffer_to_cn_msg(__u8 *buffer)
static atomic_t proc_event_num_listeners = ATOMIC_INIT(0);
static struct cb_id cn_proc_event_id = { CN_IDX_PROC, CN_VAL_PROC };

+#define CN_PROC_MSG_PARAM(name) \
+ static bool enabled_##name = true; \
+ module_param(enabled_##name, bool, 0644); \
+ MODULE_PARM_DESC(enabled_##name, "Enable " #name " messages");
+
+CN_PROC_MSG_PARAM(none)
+CN_PROC_MSG_PARAM(fork)
+CN_PROC_MSG_PARAM(exec)
+CN_PROC_MSG_PARAM(uid)
+CN_PROC_MSG_PARAM(gid)
+CN_PROC_MSG_PARAM(sid)
+CN_PROC_MSG_PARAM(ptrace)
+CN_PROC_MSG_PARAM(comm)
+CN_PROC_MSG_PARAM(coredump)
+CN_PROC_MSG_PARAM(exit)
+
/* proc_event_counts is used as the sequence number of the netlink message */
static DEFINE_PER_CPU(__u32, proc_event_counts) = { 0 };

@@ -66,6 +82,8 @@ void proc_fork_connector(struct task_struct *task)
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
struct task_struct *parent;

+ if (!enabled_fork)
+ return;
if (atomic_read(&proc_event_num_listeners) < 1)
return;

@@ -95,6 +113,8 @@ void proc_exec_connector(struct task_struct *task)
struct proc_event *ev;
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);

+ if (!enabled_exec)
+ return;
if (atomic_read(&proc_event_num_listeners) < 1)
return;

@@ -120,6 +140,8 @@ void proc_id_connector(struct task_struct *task, int which_id)
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
const struct cred *cred;

+ if (!enabled_uid)
+ return;
if (atomic_read(&proc_event_num_listeners) < 1)
return;

@@ -157,6 +179,8 @@ void proc_sid_connector(struct task_struct *task)
struct proc_event *ev;
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);

+ if (!enabled_sid)
+ return;
if (atomic_read(&proc_event_num_listeners) < 1)
return;

@@ -181,6 +205,8 @@ void proc_ptrace_connector(struct task_struct *task, int ptrace_id)
struct proc_event *ev;
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);

+ if (!enabled_ptrace)
+ return;
if (atomic_read(&proc_event_num_listeners) < 1)
return;

@@ -213,6 +239,8 @@ void proc_comm_connector(struct task_struct *task)
struct proc_event *ev;
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);

+ if (!enabled_comm)
+ return;
if (atomic_read(&proc_event_num_listeners) < 1)
return;

@@ -239,6 +267,8 @@ void proc_coredump_connector(struct task_struct *task)
struct task_struct *parent;
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);

+ if (!enabled_coredump)
+ return;
if (atomic_read(&proc_event_num_listeners) < 1)
return;

@@ -272,6 +302,8 @@ void proc_exit_connector(struct task_struct *task)
struct task_struct *parent;
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);

+ if (!enabled_exit)
+ return;
if (atomic_read(&proc_event_num_listeners) < 1)
return;

@@ -314,6 +346,8 @@ static void cn_proc_ack(int err, int rcvd_seq, int rcvd_ack)
struct proc_event *ev;
__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);

+ if (!enabled_none)
+ return;
if (atomic_read(&proc_event_num_listeners) < 1)
return;

--
2.17.1


2020-02-17 02:45:03

by David Miller

Subject: Re: [PATCH] drivers: connector: cn_proc: allow limiting certain messages


This is a netlink-based facility, therefore you should add filtering
capabilities to the netlink configuration and communications path.

Module parameters are quite verboten.
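
As a rough illustration only - one hypothetical shape for such a control
message, reusing the proc_cn_mcast_op that listeners already send from
userspace (the PROC_CN_MCAST_SET_FILTER op and event_mask field below are
invented, not existing API):

/* From <linux/cn_proc.h>: listeners already send an
 * enum proc_cn_mcast_op (PROC_CN_MCAST_LISTEN/IGNORE) over the
 * connector socket. A filter op could carry a PROC_EVENT_* bitmask
 * that the kernel stores per socket and tests before broadcasting.
 */
struct proc_cn_filter {
	enum proc_cn_mcast_op op;	/* hypothetical PROC_CN_MCAST_SET_FILTER */
	__u32 event_mask;		/* OR of PROC_EVENT_FORK ... PROC_EVENT_COREDUMP */
};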

2020-02-17 18:18:47

by Daniel Walker (danielwa)

Subject: Re: [PATCH] drivers: connector: cn_proc: allow limiting certain messages

On Sun, Feb 16, 2020 at 06:44:43PM -0800, David Miller wrote:
>
> This is a netlink-based facility, therefore you should add filtering
> capabilities to the netlink configuration and communications path.
>
> Module parameters are quite verboten.

How about adding Kconfig options to limit the types of messages? The issue
with this interface is that it's very easy for someone to enable the interface
as a listener, then never turn the interface off. Then it becomes a broadcast
interface. It's desirable to limit the noisier messages in some cases.
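
For instance, each message type could hypothetically be compiled out behind
its own option (CONFIG_PROC_EVENTS_COMM below is invented for illustration):

/* In include/linux/cn_proc.h, the hook would collapse to a no-op when
 * the corresponding (hypothetical) option is disabled:
 */
#ifdef CONFIG_PROC_EVENTS_COMM
void proc_comm_connector(struct task_struct *task);
#else
static inline void proc_comm_connector(struct task_struct *task) {}
#endif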

Daniel

2020-02-17 18:23:53

by Daniel Walker (danielwa)

Subject: Re: [PATCH] drivers: connector: cn_proc: allow limiting certain messages

On Mon, Feb 17, 2020 at 08:44:35PM +0300, Evgeniy Polyakov wrote:
> Hi Daniel, David
>
> 17.02.2020, 20:26, "Daniel Walker (danielwa)" <[email protected]>:
> > On Sun, Feb 16, 2020 at 06:44:43PM -0800, David Miller wrote:
> >> This is a netlink-based facility, therefore you should add filtering
> >> capabilities to the netlink configuration and communications path.
> >>
> >> Module parameters are quite verboten.
> >
> > How about adding Kconfig options to limit the types of messages? The issue
> > with this interface is that it's very easy for someone to enable the interface
> > as a listener, then never turn the interface off. Then it becomes a broadcast
> > interface. It's desirable to limit the noisier messages in some cases.
>
>
> Compile-time options are binary switches which live forever after the kernel
> config has been created; it's not going to help those who have enabled messages.
> Kernel module parameters are kind of a no-go, since in some cases changing
> them requires a reboot.
>
> Having netlink control from userspace is a nice option, and connector has a
> simple userspace->kernelspace channel, but it requires additional userspace
> utils or programming, which is still cumbersome.
>
> What about a sysfs interface with one file per message type?

You mean similar to the module parameters I've done, but through sysfs? It would
work for Cisco. I kind of like Kconfig because it also reduces kernel size for
messages you may never want to see.

Daniel

2020-02-17 18:33:56

by Evgeniy Polyakov

Subject: Re: [PATCH] drivers: connector: cn_proc: allow limiting certain messages



17.02.2020, 20:52, "Daniel Walker (danielwa)" <[email protected]>:
>> What about a sysfs interface with one file per message type?
>
> You mean similar to the module parameters I've done, but through sysfs? It would
> work for Cisco. I kind of like Kconfig because it also reduces kernel size for
> messages you may never want to see.

Yup - a single sysfs file per message type, matching the binary switches you've
created as kernel module parameters.

David, is it ok for you?

2020-02-18 02:52:50

by David Miller

Subject: Re: [PATCH] drivers: connector: cn_proc: allow limiting certain messages

From: "Daniel Walker (danielwa)" <[email protected]>
Date: Mon, 17 Feb 2020 17:25:57 +0000

> On Sun, Feb 16, 2020 at 06:44:43PM -0800, David Miller wrote:
>>
>> This is a netlink-based facility, therefore you should add filtering
>> capabilities to the netlink configuration and communications path.
>>
>> Module parameters are quite verboten.
>
> How about adding Kconfig options to limit the types of messages?

Dynamic over compile time, please.

2020-02-18 02:53:32

by David Miller

Subject: Re: [PATCH] drivers: connector: cn_proc: allow limiting certain messages

From: "Daniel Walker (danielwa)" <[email protected]>
Date: Mon, 17 Feb 2020 17:52:11 +0000

> On Mon, Feb 17, 2020 at 08:44:35PM +0300, Evgeniy Polyakov wrote:
>> Hi Daniel, David
>>
>> 17.02.2020, 20:26, "Daniel Walker (danielwa)" <[email protected]>:
>> > On Sun, Feb 16, 2020 at 06:44:43PM -0800, David Miller wrote:
>> >> This is a netlink-based facility, therefore you should add filtering
>> >> capabilities to the netlink configuration and communications path.
>> >>
>> >> Module parameters are quite verboten.
>> >
>> > How about adding Kconfig options to limit the types of messages? The issue
>> > with this interface is that it's very easy for someone to enable the interface
>> > as a listener, then never turn the interface off. Then it becomes a broadcast
>> > interface. It's desirable to limit the noisier messages in some cases.
>>
>>
>> Compile-time options are binary switches which live forever after the kernel
>> config has been created; it's not going to help those who have enabled messages.
>> Kernel module parameters are kind of a no-go, since in some cases changing
>> them requires a reboot.
>>
>> Having netlink control from userspace is a nice option, and connector has a
>> simple userspace->kernelspace channel, but it requires additional userspace
>> utils or programming, which is still cumbersome.
>>
>> What about a sysfs interface with one file per message type?
>
> You mean similar to the module parameters I've done, but through sysfs? It would
> work for Cisco. I kind of like Kconfig because it also reduces kernel size for
> messages you may never want to see.

Even the sysfs approach has major downsides, as it fails to take the socket context
into consideration and makes a system-wide decision for what should be a
per-service decision.

2020-02-18 16:31:32

by Daniel Walker (danielwa)

Subject: Re: [PATCH] drivers: connector: cn_proc: allow limiting certain messages

On Mon, Feb 17, 2020 at 06:52:35PM -0800, David Miller wrote:
> From: "Daniel Walker (danielwa)" <[email protected]>
> Date: Mon, 17 Feb 2020 17:52:11 +0000
>
> > On Mon, Feb 17, 2020 at 08:44:35PM +0300, Evgeniy Polyakov wrote:
> >> Hi Daniel, David
> >>
> >> 17.02.2020, 20:26, "Daniel Walker (danielwa)" <[email protected]>:
> >> > On Sun, Feb 16, 2020 at 06:44:43PM -0800, David Miller wrote:
> >> >> This is a netlink-based facility, therefore you should add filtering
> >> >> capabilities to the netlink configuration and communications path.
> >> >>
> >> >> Module parameters are quite verboten.
> >> >
> >> > How about adding Kconfig options to limit the types of messages? The issue
> >> > with this interface is that it's very easy for someone to enable the interface
> >> > as a listener, then never turn the interface off. Then it becomes a broadcast
> >> > interface. It's desirable to limit the noisier messages in some cases.
> >>
> >>
> >> Compile-time options are binary switches which live forever after the kernel
> >> config has been created; it's not going to help those who have enabled messages.
> >> Kernel module parameters are kind of a no-go, since in some cases changing
> >> them requires a reboot.
> >>
> >> Having netlink control from userspace is a nice option, and connector has a
> >> simple userspace->kernelspace channel, but it requires additional userspace
> >> utils or programming, which is still cumbersome.
> >>
> >> What about a sysfs interface with one file per message type?
> >
> > You mean similar to the module parameters I've done, but through sysfs? It would
> > work for Cisco. I kind of like Kconfig because it also reduces kernel size for
> > messages you may never want to see.
>
> Even the sysfs approach has major downsides, as it fails to take the socket context
> into consideration and makes a system-wide decision for what should be a
> per-service decision.

It's multicast and essentially broadcast messages. So everyone gets every
message, and once it's on it's likely it won't be turned off. Given that, it seems
appropriate that the system administrator has control of what messages, if any,
are sent, and it should affect all listeners.

I think I would agree with you if this were unicast, and each listener could tailor
what messages they want to get. However, this interface isn't that, and it would
be considerable work to convert it to that.

Daniel

2020-02-18 16:53:35

by Evgeniy Polyakov

Subject: Re: [PATCH] drivers: connector: cn_proc: allow limiting certain messages

18.02.2020, 19:30, "Daniel Walker (danielwa)" <[email protected]>:

> It's multicast and essentially broadcast messages. So everyone gets every
> message, and once it's on it's likely it won't be turned off. Given that, it seems
> appropriate that the system administrator has control of what messages, if any,
> are sent, and it should affect all listeners.
>
> I think I would agree with you if this were unicast, and each listener could tailor
> what messages they want to get. However, this interface isn't that, and it would
> be considerable work to convert it to that.

Connector has message/channel ids; you can implement this rate-limiting scheme per user/socket.
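
Roughly, and purely as a hypothetical sketch - today all cn_proc traffic
shares the single CN_IDX_PROC channel, and the index below is invented:

#include <linux/connector.h>

/* Hypothetical: a dedicated connector channel for coredump events, so
 * listeners could bind to this netlink group instead of the catch-all
 * CN_IDX_PROC one. A real implementation would need an index actually
 * reserved in connector.h.
 */
#define CN_IDX_PROC_COREDUMP	0xabc	/* invented value */

static struct cb_id cn_coredump_id = { CN_IDX_PROC_COREDUMP, CN_VAL_PROC };

/* proc_coredump_connector() would then broadcast on that group only: */
static int cn_coredump_send(struct cn_msg *msg)
{
	return cn_netlink_send(msg, 0, CN_IDX_PROC_COREDUMP, GFP_KERNEL);
}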

This is probably not required given the cn_proc use case - is it some central authority
which needs or doesn't need some messages? If so, it cannot be bad to have a central switch.

But I have also heard that container management tools are using this; in that case,
disabling some things globally will suddenly break them.

2020-02-18 16:55:38

by Daniel Walker (danielwa)

Subject: Re: [PATCH] drivers: connector: cn_proc: allow limiting certain messages

On Tue, Feb 18, 2020 at 07:46:12PM +0300, Evgeniy Polyakov wrote:
> 18.02.2020, 19:30, "Daniel Walker (danielwa)" <[email protected]>:
>
> > It's multicast and essentially broadcast messages. So everyone gets every
> > message, and once it's on it's likely it won't be turned off. Given that, it seems
> > appropriate that the system administrator has control of what messages, if any,
> > are sent, and it should affect all listeners.
> >
> > I think I would agree with you if this were unicast, and each listener could tailor
> > what messages they want to get. However, this interface isn't that, and it would
> > be considerable work to convert it to that.
>
> Connector has message/channel ids; you can implement this rate-limiting scheme per user/socket.

I don't think I know enough about netlink to do this, but looking at it prior to
sending this patch it looked like a fair amount of work.

> This is probably not required given the cn_proc use case - is it some central authority
> which needs or doesn't need some messages? If so, it cannot be bad to have a central switch.
>
> But I have also heard that container management tools are using this; in that case,
> disabling some things globally will suddenly break them.

Cisco currently doesn't use this interface at all; the reason is that it's too
noisy and we get too many wake-ups. If we did use this interface there would be
one listener. I would think most embedded use cases would have one listener.

Do you know which container management tools are using this?

Daniel

2020-02-18 20:36:19

by David Miller

Subject: Re: [PATCH] drivers: connector: cn_proc: allow limiting certain messages

From: "Daniel Walker (danielwa)" <[email protected]>
Date: Tue, 18 Feb 2020 16:30:36 +0000

> It's multicast and essentially broadcast messages. So everyone gets every
> message, and once it's on it's likely it won't be turned off. Given that, it seems
> appropriate that the system administrator has control of what messages, if any,
> are sent, and it should affect all listeners.
>
> I think I would agree with you if this were unicast, and each listener could tailor
> what messages they want to get. However, this interface isn't that, and it would
> be considerable work to convert it to that.

You filter at recvmsg() on the specific socket, multicast or not; I
don't understand what the issue is.
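
A minimal userspace sketch of that, assuming a NETLINK_CONNECTOR socket that
is already bound and subscribed via PROC_CN_MCAST_LISTEN (error handling
omitted); everything except coredump events is simply dropped on receive:

#include <linux/connector.h>
#include <linux/cn_proc.h>
#include <linux/netlink.h>
#include <sys/socket.h>

static void coredump_only_loop(int sock)
{
	char buf[4096];

	for (;;) {
		ssize_t len = recv(sock, buf, sizeof(buf), 0);
		struct nlmsghdr *nlh = (struct nlmsghdr *)buf;

		if (len <= 0)
			break;
		for (; NLMSG_OK(nlh, len); nlh = NLMSG_NEXT(nlh, len)) {
			struct cn_msg *cn = NLMSG_DATA(nlh);
			struct proc_event *ev = (struct proc_event *)cn->data;

			if (ev->what != PROC_EVENT_COREDUMP)
				continue;	/* the receive-side filter */
			/* handle ev->event_data.coredump here */
		}
	}
}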

2020-02-18 20:55:34

by Daniel Walker (danielwa)

Subject: Re: [PATCH] drivers: connector: cn_proc: allow limiting certain messages

On Tue, Feb 18, 2020 at 12:35:46PM -0800, David Miller wrote:
> From: "Daniel Walker (danielwa)" <[email protected]>
> Date: Tue, 18 Feb 2020 16:30:36 +0000
>
> > It's multicast and essentially broadcast messages. So everyone gets every
> > message, and once it's on it's likely it won't be turned off. Given that, it seems
> > appropriate that the system administrator has control of what messages, if any,
> > are sent, and it should affect all listeners.
> >
> > I think I would agree with you if this were unicast, and each listener could tailor
> > what messages they want to get. However, this interface isn't that, and it would
> > be considerable work to convert it to that.
>
> You filter at recvmsg() on the specific socket, multicast or not; I
> don't understand what the issue is.

Cisco tried something like this (I don't know if it was exactly what you're referring to),
and it was messy and fairly complicated for a simple interface. In fact it was
the first thing I suggested for Cisco.

I'm not sure why Connector has to supply an exact set of messages; one could
just make a whole new kernel module hooked into netlink sending a different
subset of connector messages. The interface eats up CPU and slows the
system if it's sending messages you're just going to ignore. I'm sure the
filtering would also slow down the system.

Daniel

2020-02-19 01:20:17

by Evgeniy Polyakov

Subject: Re: [PATCH] drivers: connector: cn_proc: allow limiting certain messages

18.02.2020, 23:55, "Daniel Walker (danielwa)" <[email protected]>:
>> > I think I would agree with you if this were unicast, and each listener could tailor
>> > what messages they want to get. However, this interface isn't that, and it would
>> > be considerable work to convert it to that.
>>
>> You filter at recvmsg() on the specific socket, multicast or not; I
>> don't understand what the issue is.
>
> Cisco tried something like this (I don't know if it was exactly what you're referring to),
> and it was messy and fairly complicated for a simple interface. In fact it was
> the first thing I suggested for Cisco.
>
> I'm not sure why Connector has to supply an exact set of messages; one could
> just make a whole new kernel module hooked into netlink sending a different
> subset of connector messages. The interface eats up CPU and slows the
> system if it's sending messages you're just going to ignore. I'm sure the
> filtering would also slow down the system.

Connector has a unicast interface and a multicast-like 'subscription', but sending system-wide
messages implies using the broadcast interface, since you cannot hold per-user/per-socket
information about a particular event mask. Instead you have channels in connector, each of
which could be used for a specific message type, but that looks like overkill for simple
process mask changes.

And in fact, now I do not understand your point.
I thought you were concerned about receiving too many messages from a particular connector
module because there are, for example, too many 'fork/signal' events, and that now you want
to limit them to 'fork' events only, even if there could be other users who want to receive
'signal' and other events.

And you blame connector - basically a network medium, call it TCP if you like - for not filtering
this for you? And after you have been told to use connector channels - let's call them TCP
ports - which requires quite a bit of work, you do not want to do this (also, it will break
backward compatibility for everyone else, including (!) Cisco (!!)). I'm a little bit lost here.

As a side note and a more practical approach - do we want to have a global switch for broadcasting particular process state changes?

2020-02-19 15:38:05

by Daniel Walker (danielwa)

Subject: Re: [PATCH] drivers: connector: cn_proc: allow limiting certain messages

On Wed, Feb 19, 2020 at 04:19:36AM +0300, Evgeniy Polyakov wrote:
> 18.02.2020, 23:55, "Daniel Walker (danielwa)" <[email protected]>:
> >> > I think I would agree with you if this were unicast, and each listener could tailor
> >> > what messages they want to get. However, this interface isn't that, and it would
> >> > be considerable work to convert it to that.
> >>
> >> You filter at recvmsg() on the specific socket, multicast or not; I
> >> don't understand what the issue is.
> >
> > Cisco tried something like this (I don't know if it was exactly what you're referring to),
> > and it was messy and fairly complicated for a simple interface. In fact it was
> > the first thing I suggested for Cisco.
> >
> > I'm not sure why Connector has to supply an exact set of messages; one could
> > just make a whole new kernel module hooked into netlink sending a different
> > subset of connector messages. The interface eats up CPU and slows the
> > system if it's sending messages you're just going to ignore. I'm sure the
> > filtering would also slow down the system.
>
> Connector has a unicast interface and a multicast-like 'subscription', but sending system-wide
> messages implies using the broadcast interface, since you cannot hold per-user/per-socket
> information about a particular event mask. Instead you have channels in connector, each of
> which could be used for a specific message type, but that looks like overkill for simple
> process mask changes.
>
> And in fact, now I do not understand your point.
> I thought you were concerned about receiving too many messages from a particular connector
> module because there are, for example, too many 'fork/signal' events, and that now you want
> to limit them to 'fork' events only, even if there could be other users who want to receive
> 'signal' and other events.

This is what I'm looking for, except not fork.

> And you blame connector - basically a network medium, call it TCP if you like - for not filtering
> this for you? And after you have been told to use connector channels - let's call them TCP
> ports - which requires quite a bit of work, you do not want to do this (also, it will break
> backward compatibility for everyone else, including (!) Cisco (!!)). I'm a little bit lost here.

Maybe I'm confusing connector with cn_proc. Of course I've modified cn_proc, and
that's all I'm concerned with. If Connector is a larger entity for transmission, I'm
not concerned with that.

To be honest, I'm not sure where your confusion is coming from. My original patch
is what I want, what I need, and what we're discussing. If David suggested
something I didn't understand, then maybe we're discussing something from two
different perspectives.

> As a side note and a more practical approach - do we want to have a global switch for broadcasting particular process state changes?

I think it would depend on whether multiple processes are likely to be listening.
Cisco would likely have one process, but there could be a case with container
tools where there are multiple listeners. I don't know how the container tools are
using this interface.

Daniel