From: David Howells
Date: 2022-11-23 10:31:36
Subject: [PATCH net-next 00/13] rxrpc: Increasing SACK size and moving away from softirq, part 2


This is the second set of patches in the process of moving rxrpc from doing
much of its work in softirq context to doing it in an I/O thread in process
context, thereby making it easier to support a larger SACK table (full
description in part 1 [1]).

[!] Note that these patches are based on a merge of a fix in net/master
with net-next/master. The fix makes a number of conflicting changes,
so it's better if this set is built on top of it.

This set of patches includes some cleanups, adds some testing and overhauls
some tracing:

(1) Remove declaration of rxrpc_kernel_call_is_complete() as the
definition is no longer present.

(2) Remove the knet() and kproto() macros in favour of using tracepoints.

(3) Remove the handling of duplicate packets from recvmsg. The input side
no longer inserts overlapping/duplicate packets into the recvmsg
queue.

(4) Don't embed the rxrpc_conn_parameters struct in the rxrpc_connection or
rxrpc_bundle structs; instead, put the members in directly.

(5) Extract the abort code from a received abort packet right up front
rather than doing it in multiple places later.

(6) Use enums and symbol lists rather than __builtin_return_address() to
indicate where a tracepoint was triggered for local, peer, conn, call
and skbuff tracing.

(7) Add a refcount tracepoint for the rxrpc_bundle struct.

(8) Implement an in-kernel server for the AFS rxperf testing program to
talk to (enabled by a Kconfig option).

Link: https://lore.kernel.org/r/166794587113.2389296.16484814996876530222.stgit@warthog.procyon.org.uk/ [1]

---
The patches are tagged here:

git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git tags/rxrpc-next-20221121

And can be found on this branch:

http://git.kernel.org/cgit/linux/kernel/git/dhowells/linux-fs.git/log/?h=rxrpc-next

David
---
David Howells (13):
rxrpc: Implement an in-kernel rxperf server for testing purposes
rxrpc: Remove decl for rxrpc_kernel_call_is_complete()
rxrpc: Remove handling of duplicate packets in recvmsg_queue
rxrpc: Remove the [k_]proto() debugging macros
rxrpc: Remove the [_k]net() debugging macros
rxrpc: Drop rxrpc_conn_parameters from rxrpc_connection and rxrpc_bundle
rxrpc: Extract the code from a received ABORT packet much earlier
rxrpc: trace: Don't use __builtin_return_address for rxrpc_local tracing
rxrpc: trace: Don't use __builtin_return_address for rxrpc_peer tracing
rxrpc: trace: Don't use __builtin_return_address for rxrpc_conn tracing
rxrpc: trace: Don't use __builtin_return_address for rxrpc_call tracing
rxrpc: Trace rxrpc_bundle refcount
rxrpc: trace: Don't use __builtin_return_address for sk_buff tracing


include/net/af_rxrpc.h | 2 +-
include/trace/events/rxrpc.h | 385 +++++++++++++++-------
net/rxrpc/Kconfig | 7 +
net/rxrpc/Makefile | 3 +
net/rxrpc/af_rxrpc.c | 10 +-
net/rxrpc/ar-internal.h | 121 ++++---
net/rxrpc/call_accept.c | 39 +--
net/rxrpc/call_event.c | 18 +-
net/rxrpc/call_object.c | 120 +++----
net/rxrpc/conn_client.c | 112 ++++---
net/rxrpc/conn_event.c | 52 ++-
net/rxrpc/conn_object.c | 66 ++--
net/rxrpc/conn_service.c | 15 +-
net/rxrpc/input.c | 103 +++---
net/rxrpc/key.c | 2 +-
net/rxrpc/local_event.c | 7 +-
net/rxrpc/local_object.c | 85 +++--
net/rxrpc/output.c | 45 ++-
net/rxrpc/peer_event.c | 74 +----
net/rxrpc/peer_object.c | 44 ++-
net/rxrpc/proc.c | 6 +-
net/rxrpc/recvmsg.c | 32 +-
net/rxrpc/rxkad.c | 63 ++--
net/rxrpc/rxperf.c | 614 +++++++++++++++++++++++++++++++++++
net/rxrpc/security.c | 4 +-
net/rxrpc/sendmsg.c | 6 +-
net/rxrpc/server_key.c | 25 ++
net/rxrpc/skbuff.c | 36 +-
28 files changed, 1403 insertions(+), 693 deletions(-)
create mode 100644 net/rxrpc/rxperf.c



From: David Howells
Date: 2022-11-23 10:31:40
Subject: [PATCH net-next 01/13] rxrpc: Implement an in-kernel rxperf server for testing purposes

Implement an in-kernel rxperf server to allow kernel-based rxrpc services
to be tested directly, unlike with AFS, where they're only accessed when the
fileserver decides it wants to.

This is implemented as a module that, if loaded, opens UDP port 7009
(afs3-rmtsys) and listens on it for incoming calls. Calls can be generated
using the rxperf command shipped with OpenAFS, for example.

Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
---
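
[Editorial sketch, not part of the patch: a kernel service using the new
rxrpc_sock_set_security_keyring() would typically attach the keyring after
creating the AF_RXRPC socket but before binding, since the setter refuses a
socket that is no longer RXRPC_UNBOUND. The service ID and port below are
placeholders; the real sequence is rxperf_open_socket() further down.]

#include <linux/key.h>
#include <net/sock.h>
#include <net/af_rxrpc.h>

/* Hypothetical helper; mirrors rxperf_open_socket() below. */
static int example_open_rxrpc_server(struct key *keyring)
{
	struct sockaddr_rxrpc srx;
	struct socket *sock;
	int ret;

	ret = sock_create_kern(&init_net, AF_RXRPC, SOCK_DGRAM, PF_INET6, &sock);
	if (ret < 0)
		return ret;

	/* The keyring must be attached while the socket is still unbound,
	 * otherwise rxrpc_sock_set_security_keyring() returns -EISCONN.
	 */
	ret = rxrpc_sock_set_security_keyring(sock->sk, keyring);
	if (ret < 0)
		goto error;

	memset(&srx, 0, sizeof(srx));
	srx.srx_family			= AF_RXRPC;
	srx.srx_service			= 1234;		/* placeholder service ID */
	srx.transport_type		= SOCK_DGRAM;
	srx.transport_len		= sizeof(srx.transport.sin6);
	srx.transport.sin6.sin6_family	= AF_INET6;
	srx.transport.sin6.sin6_port	= htons(7009);	/* placeholder port */

	ret = kernel_bind(sock, (struct sockaddr *)&srx, sizeof(srx));
	if (ret < 0)
		goto error;

	ret = kernel_listen(sock, INT_MAX);
	if (ret < 0)
		goto error;
	return 0;

error:
	sock_release(sock);
	return ret;
}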

include/net/af_rxrpc.h | 1
net/rxrpc/Kconfig | 7 +
net/rxrpc/Makefile | 3
net/rxrpc/rxperf.c | 614 ++++++++++++++++++++++++++++++++++++++++++++++++
net/rxrpc/server_key.c | 25 ++
5 files changed, 650 insertions(+)
create mode 100644 net/rxrpc/rxperf.c

diff --git a/include/net/af_rxrpc.h b/include/net/af_rxrpc.h
index b69ca695935c..dc033f08191e 100644
--- a/include/net/af_rxrpc.h
+++ b/include/net/af_rxrpc.h
@@ -71,5 +71,6 @@ void rxrpc_kernel_set_max_life(struct socket *, struct rxrpc_call *,
unsigned long);

int rxrpc_sock_set_min_security_level(struct sock *sk, unsigned int val);
+int rxrpc_sock_set_security_keyring(struct sock *, struct key *);

#endif /* _NET_RXRPC_H */
diff --git a/net/rxrpc/Kconfig b/net/rxrpc/Kconfig
index accd35c05577..7ae023b37a83 100644
--- a/net/rxrpc/Kconfig
+++ b/net/rxrpc/Kconfig
@@ -58,4 +58,11 @@ config RXKAD

See Documentation/networking/rxrpc.rst.

+config RXPERF
+ tristate "RxRPC test service"
+ help
+ Provide an rxperf service tester. This listens on UDP port 7009 for
+ incoming calls from the rxperf program (an example of which can be
+ found in OpenAFS).
+
endif
diff --git a/net/rxrpc/Makefile b/net/rxrpc/Makefile
index fdeba488fc6e..79687477d93c 100644
--- a/net/rxrpc/Makefile
+++ b/net/rxrpc/Makefile
@@ -36,3 +36,6 @@ rxrpc-y := \
rxrpc-$(CONFIG_PROC_FS) += proc.o
rxrpc-$(CONFIG_RXKAD) += rxkad.o
rxrpc-$(CONFIG_SYSCTL) += sysctl.o
+
+
+obj-$(CONFIG_RXPERF) += rxperf.o
diff --git a/net/rxrpc/rxperf.c b/net/rxrpc/rxperf.c
new file mode 100644
index 000000000000..7f8a1a186da8
--- /dev/null
+++ b/net/rxrpc/rxperf.c
@@ -0,0 +1,614 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* In-kernel rxperf server for testing purposes.
+ *
+ * Copyright (C) 2022 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells ([email protected])
+ */
+
+#define pr_fmt(fmt) "rxperf: " fmt
+#include <linux/slab.h>
+#include <net/sock.h>
+//#include <net/netns/generic.h>
+#include <net/af_rxrpc.h>
+
+#define RXPERF_PORT 7009
+#define RX_PERF_SERVICE 147
+#define RX_PERF_VERSION 3
+#define RX_PERF_SEND 0
+#define RX_PERF_RECV 1
+#define RX_PERF_RPC 3
+#define RX_PERF_FILE 4
+#define RX_PERF_MAGIC_COOKIE 0x4711
+
+struct rxperf_proto_params {
+ __be32 version;
+ __be32 type;
+ __be32 rsize;
+ __be32 wsize;
+} __packed;
+
+static const u8 rxperf_magic_cookie[] = { 0x00, 0x00, 0x47, 0x11 };
+static const u8 secret[8] = { 0xa7, 0x83, 0x8a, 0xcb, 0xc7, 0x83, 0xec, 0x94 };
+
+enum rxperf_call_state {
+ RXPERF_CALL_SV_AWAIT_PARAMS, /* Server: Awaiting parameter block */
+ RXPERF_CALL_SV_AWAIT_REQUEST, /* Server: Awaiting request data */
+ RXPERF_CALL_SV_REPLYING, /* Server: Replying */
+ RXPERF_CALL_SV_AWAIT_ACK, /* Server: Awaiting final ACK */
+ RXPERF_CALL_COMPLETE, /* Completed or failed */
+};
+
+struct rxperf_call {
+ struct rxrpc_call *rxcall;
+ struct iov_iter iter;
+ struct kvec kvec[1];
+ struct work_struct work;
+ const char *type;
+ size_t iov_len;
+ size_t req_len; /* Size of request blob */
+ size_t reply_len; /* Size of reply blob */
+ unsigned int debug_id;
+ unsigned int operation_id;
+ struct rxperf_proto_params params;
+ __be32 tmp[2];
+ s32 abort_code;
+ enum rxperf_call_state state;
+ short error;
+ unsigned short unmarshal;
+ u16 service_id;
+ int (*deliver)(struct rxperf_call *call);
+ void (*processor)(struct work_struct *work);
+};
+
+static struct socket *rxperf_socket;
+static struct key *rxperf_sec_keyring; /* Ring of security/crypto keys */
+static struct workqueue_struct *rxperf_workqueue;
+
+static void rxperf_deliver_to_call(struct work_struct *work);
+static int rxperf_deliver_param_block(struct rxperf_call *call);
+static int rxperf_deliver_request(struct rxperf_call *call);
+static int rxperf_process_call(struct rxperf_call *call);
+static void rxperf_charge_preallocation(struct work_struct *work);
+
+static DECLARE_WORK(rxperf_charge_preallocation_work,
+ rxperf_charge_preallocation);
+
+static inline void rxperf_set_call_state(struct rxperf_call *call,
+ enum rxperf_call_state to)
+{
+ call->state = to;
+}
+
+static inline void rxperf_set_call_complete(struct rxperf_call *call,
+ int error, s32 remote_abort)
+{
+ if (call->state != RXPERF_CALL_COMPLETE) {
+ call->abort_code = remote_abort;
+ call->error = error;
+ call->state = RXPERF_CALL_COMPLETE;
+ }
+}
+
+static void rxperf_rx_discard_new_call(struct rxrpc_call *rxcall,
+ unsigned long user_call_ID)
+{
+ kfree((struct rxperf_call *)user_call_ID);
+}
+
+static void rxperf_rx_new_call(struct sock *sk, struct rxrpc_call *rxcall,
+ unsigned long user_call_ID)
+{
+ queue_work(rxperf_workqueue, &rxperf_charge_preallocation_work);
+}
+
+static void rxperf_queue_call_work(struct rxperf_call *call)
+{
+ queue_work(rxperf_workqueue, &call->work);
+}
+
+static void rxperf_notify_rx(struct sock *sk, struct rxrpc_call *rxcall,
+ unsigned long call_user_ID)
+{
+ struct rxperf_call *call = (struct rxperf_call *)call_user_ID;
+
+ if (call->state != RXPERF_CALL_COMPLETE)
+ rxperf_queue_call_work(call);
+}
+
+static void rxperf_rx_attach(struct rxrpc_call *rxcall, unsigned long user_call_ID)
+{
+ struct rxperf_call *call = (struct rxperf_call *)user_call_ID;
+
+ call->rxcall = rxcall;
+}
+
+static void rxperf_notify_end_reply_tx(struct sock *sock,
+ struct rxrpc_call *rxcall,
+ unsigned long call_user_ID)
+{
+ rxperf_set_call_state((struct rxperf_call *)call_user_ID,
+ RXPERF_CALL_SV_AWAIT_ACK);
+}
+
+/*
+ * Charge the incoming call preallocation.
+ */
+static void rxperf_charge_preallocation(struct work_struct *work)
+{
+ struct rxperf_call *call;
+
+ for (;;) {
+ call = kzalloc(sizeof(*call), GFP_KERNEL);
+ if (!call)
+ break;
+
+ call->type = "unset";
+ call->debug_id = atomic_inc_return(&rxrpc_debug_id);
+ call->deliver = rxperf_deliver_param_block;
+ call->state = RXPERF_CALL_SV_AWAIT_PARAMS;
+ call->service_id = RX_PERF_SERVICE;
+ call->iov_len = sizeof(call->params);
+ call->kvec[0].iov_len = sizeof(call->params);
+ call->kvec[0].iov_base = &call->params;
+ iov_iter_kvec(&call->iter, READ, call->kvec, 1, call->iov_len);
+ INIT_WORK(&call->work, rxperf_deliver_to_call);
+
+ if (rxrpc_kernel_charge_accept(rxperf_socket,
+ rxperf_notify_rx,
+ rxperf_rx_attach,
+ (unsigned long)call,
+ GFP_KERNEL,
+ call->debug_id) < 0)
+ break;
+ call = NULL;
+ }
+
+ kfree(call);
+}
+
+/*
+ * Open an rxrpc socket and bind it to be a server for callback notifications
+ * - the socket is left in blocking mode and non-blocking ops use MSG_DONTWAIT
+ */
+static int rxperf_open_socket(void)
+{
+ struct sockaddr_rxrpc srx;
+ struct socket *socket;
+ int ret;
+
+ ret = sock_create_kern(&init_net, AF_RXRPC, SOCK_DGRAM, PF_INET6,
+ &socket);
+ if (ret < 0)
+ goto error_1;
+
+ socket->sk->sk_allocation = GFP_NOFS;
+
+ /* bind the callback manager's address to make this a server socket */
+ memset(&srx, 0, sizeof(srx));
+ srx.srx_family = AF_RXRPC;
+ srx.srx_service = RX_PERF_SERVICE;
+ srx.transport_type = SOCK_DGRAM;
+ srx.transport_len = sizeof(srx.transport.sin6);
+ srx.transport.sin6.sin6_family = AF_INET6;
+ srx.transport.sin6.sin6_port = htons(RXPERF_PORT);
+
+ ret = rxrpc_sock_set_min_security_level(socket->sk,
+ RXRPC_SECURITY_ENCRYPT);
+ if (ret < 0)
+ goto error_2;
+
+ ret = rxrpc_sock_set_security_keyring(socket->sk, rxperf_sec_keyring);
+
+ ret = kernel_bind(socket, (struct sockaddr *)&srx, sizeof(srx));
+ if (ret < 0)
+ goto error_2;
+
+ rxrpc_kernel_new_call_notification(socket, rxperf_rx_new_call,
+ rxperf_rx_discard_new_call);
+
+ ret = kernel_listen(socket, INT_MAX);
+ if (ret < 0)
+ goto error_2;
+
+ rxperf_socket = socket;
+ rxperf_charge_preallocation(&rxperf_charge_preallocation_work);
+ return 0;
+
+error_2:
+ sock_release(socket);
+error_1:
+ pr_err("Can't set up rxperf socket: %d\n", ret);
+ return ret;
+}
+
+/*
+ * close the rxrpc socket rxperf was using
+ */
+static void rxperf_close_socket(void)
+{
+ kernel_listen(rxperf_socket, 0);
+ kernel_sock_shutdown(rxperf_socket, SHUT_RDWR);
+ flush_workqueue(rxperf_workqueue);
+ sock_release(rxperf_socket);
+}
+
+/*
+ * Log remote abort codes that indicate that we have a protocol disagreement
+ * with the server.
+ */
+static void rxperf_log_error(struct rxperf_call *call, s32 remote_abort)
+{
+ static int max = 0;
+ const char *msg;
+ int m;
+
+ switch (remote_abort) {
+ case RX_EOF: msg = "unexpected EOF"; break;
+ case RXGEN_CC_MARSHAL: msg = "client marshalling"; break;
+ case RXGEN_CC_UNMARSHAL: msg = "client unmarshalling"; break;
+ case RXGEN_SS_MARSHAL: msg = "server marshalling"; break;
+ case RXGEN_SS_UNMARSHAL: msg = "server unmarshalling"; break;
+ case RXGEN_DECODE: msg = "opcode decode"; break;
+ case RXGEN_SS_XDRFREE: msg = "server XDR cleanup"; break;
+ case RXGEN_CC_XDRFREE: msg = "client XDR cleanup"; break;
+ case -32: msg = "insufficient data"; break;
+ default:
+ return;
+ }
+
+ m = max;
+ if (m < 3) {
+ max = m + 1;
+ pr_info("Peer reported %s failure on %s\n", msg, call->type);
+ }
+}
+
+/*
+ * deliver messages to a call
+ */
+static void rxperf_deliver_to_call(struct work_struct *work)
+{
+ struct rxperf_call *call = container_of(work, struct rxperf_call, work);
+ enum rxperf_call_state state;
+ u32 abort_code, remote_abort = 0;
+ int ret;
+
+ if (call->state == RXPERF_CALL_COMPLETE)
+ return;
+
+ while (state = call->state,
+ state == RXPERF_CALL_SV_AWAIT_PARAMS ||
+ state == RXPERF_CALL_SV_AWAIT_REQUEST ||
+ state == RXPERF_CALL_SV_AWAIT_ACK
+ ) {
+ if (state == RXPERF_CALL_SV_AWAIT_ACK) {
+ if (!rxrpc_kernel_check_life(rxperf_socket, call->rxcall))
+ goto call_complete;
+ return;
+ }
+
+ ret = call->deliver(call);
+ if (ret == 0)
+ ret = rxperf_process_call(call);
+
+ switch (ret) {
+ case 0:
+ continue;
+ case -EINPROGRESS:
+ case -EAGAIN:
+ return;
+ case -ECONNABORTED:
+ rxperf_log_error(call, call->abort_code);
+ goto call_complete;
+ case -EOPNOTSUPP:
+ abort_code = RXGEN_OPCODE;
+ rxrpc_kernel_abort_call(rxperf_socket, call->rxcall,
+ abort_code, ret, "GOP");
+ goto call_complete;
+ case -ENOTSUPP:
+ abort_code = RX_USER_ABORT;
+ rxrpc_kernel_abort_call(rxperf_socket, call->rxcall,
+ abort_code, ret, "GUA");
+ goto call_complete;
+ case -EIO:
+ pr_err("Call %u in bad state %u\n",
+ call->debug_id, call->state);
+ fallthrough;
+ case -ENODATA:
+ case -EBADMSG:
+ case -EMSGSIZE:
+ case -ENOMEM:
+ case -EFAULT:
+ rxrpc_kernel_abort_call(rxperf_socket, call->rxcall,
+ RXGEN_SS_UNMARSHAL, ret, "GUM");
+ goto call_complete;
+ default:
+ rxrpc_kernel_abort_call(rxperf_socket, call->rxcall,
+ RX_CALL_DEAD, ret, "GER");
+ goto call_complete;
+ }
+ }
+
+call_complete:
+ rxperf_set_call_complete(call, ret, remote_abort);
+ /* The call may have been requeued */
+ rxrpc_kernel_end_call(rxperf_socket, call->rxcall);
+ cancel_work(&call->work);
+ kfree(call);
+}
+
+/*
+ * Extract a piece of data from the received data socket buffers.
+ */
+static int rxperf_extract_data(struct rxperf_call *call, bool want_more)
+{
+ u32 remote_abort = 0;
+ int ret;
+
+ ret = rxrpc_kernel_recv_data(rxperf_socket, call->rxcall, &call->iter,
+ &call->iov_len, want_more, &remote_abort,
+ &call->service_id);
+ pr_debug("Extract i=%zu l=%zu m=%u ret=%d\n",
+ iov_iter_count(&call->iter), call->iov_len, want_more, ret);
+ if (ret == 0 || ret == -EAGAIN)
+ return ret;
+
+ if (ret == 1) {
+ switch (call->state) {
+ case RXPERF_CALL_SV_AWAIT_REQUEST:
+ rxperf_set_call_state(call, RXPERF_CALL_SV_REPLYING);
+ break;
+ case RXPERF_CALL_COMPLETE:
+ pr_debug("premature completion %d", call->error);
+ return call->error;
+ default:
+ break;
+ }
+ return 0;
+ }
+
+ rxperf_set_call_complete(call, ret, remote_abort);
+ return ret;
+}
+
+/*
+ * Grab the operation ID from an incoming manager call.
+ */
+static int rxperf_deliver_param_block(struct rxperf_call *call)
+{
+ u32 version;
+ int ret;
+
+ /* Extract the parameter block */
+ ret = rxperf_extract_data(call, true);
+ if (ret < 0)
+ return ret;
+
+ version = ntohl(call->params.version);
+ call->operation_id = ntohl(call->params.type);
+ call->deliver = rxperf_deliver_request;
+
+ if (version != RX_PERF_VERSION) {
+ pr_info("Version mismatch %x\n", version);
+ return -ENOTSUPP;
+ }
+
+ switch (call->operation_id) {
+ case RX_PERF_SEND:
+ call->type = "send";
+ call->reply_len = 0;
+ call->iov_len = 4; /* Expect req size */
+ break;
+ case RX_PERF_RECV:
+ call->type = "recv";
+ call->req_len = 0;
+ call->iov_len = 4; /* Expect reply size */
+ break;
+ case RX_PERF_RPC:
+ call->type = "rpc";
+ call->iov_len = 8; /* Expect req size and reply size */
+ break;
+ case RX_PERF_FILE:
+ call->type = "file";
+ fallthrough;
+ default:
+ return -EOPNOTSUPP;
+ }
+
+ rxperf_set_call_state(call, RXPERF_CALL_SV_AWAIT_REQUEST);
+ return call->deliver(call);
+}
+
+/*
+ * Deliver the request data.
+ */
+static int rxperf_deliver_request(struct rxperf_call *call)
+{
+ int ret;
+
+ switch (call->unmarshal) {
+ case 0:
+ call->kvec[0].iov_len = call->iov_len;
+ call->kvec[0].iov_base = call->tmp;
+ iov_iter_kvec(&call->iter, READ, call->kvec, 1, call->iov_len);
+ call->unmarshal++;
+ fallthrough;
+ case 1:
+ ret = rxperf_extract_data(call, true);
+ if (ret < 0)
+ return ret;
+
+ switch (call->operation_id) {
+ case RX_PERF_SEND:
+ call->type = "send";
+ call->req_len = ntohl(call->tmp[0]);
+ call->reply_len = 0;
+ break;
+ case RX_PERF_RECV:
+ call->type = "recv";
+ call->req_len = 0;
+ call->reply_len = ntohl(call->tmp[0]);
+ break;
+ case RX_PERF_RPC:
+ call->type = "rpc";
+ call->req_len = ntohl(call->tmp[0]);
+ call->reply_len = ntohl(call->tmp[1]);
+ break;
+ default:
+ pr_info("Can't parse extra params\n");
+ return -EIO;
+ }
+
+ pr_debug("CALL op=%s rq=%zx rp=%zx\n",
+ call->type, call->req_len, call->reply_len);
+
+ call->iov_len = call->req_len;
+ iov_iter_discard(&call->iter, READ, call->req_len);
+ call->unmarshal++;
+ fallthrough;
+ case 2:
+ ret = rxperf_extract_data(call, false);
+ if (ret < 0)
+ return ret;
+ call->unmarshal++;
+ fallthrough;
+ default:
+ return 0;
+ }
+}
+
+/*
+ * Process a call for which we've received the request.
+ */
+static int rxperf_process_call(struct rxperf_call *call)
+{
+ struct msghdr msg = {};
+ struct bio_vec bv[1];
+ struct kvec iov[1];
+ ssize_t n;
+ size_t reply_len = call->reply_len, len;
+
+ rxrpc_kernel_set_tx_length(rxperf_socket, call->rxcall,
+ reply_len + sizeof(rxperf_magic_cookie));
+
+ while (reply_len > 0) {
+ len = min(reply_len, PAGE_SIZE);
+ bv[0].bv_page = ZERO_PAGE(0);
+ bv[0].bv_offset = 0;
+ bv[0].bv_len = len;
+ iov_iter_bvec(&msg.msg_iter, WRITE, bv, 1, len);
+ msg.msg_flags = MSG_MORE;
+ n = rxrpc_kernel_send_data(rxperf_socket, call->rxcall, &msg,
+ len, rxperf_notify_end_reply_tx);
+ if (n < 0)
+ return n;
+ if (n == 0)
+ return -EIO;
+ reply_len -= n;
+ }
+
+ len = sizeof(rxperf_magic_cookie);
+ iov[0].iov_base = (void *)rxperf_magic_cookie;
+ iov[0].iov_len = len;
+ iov_iter_kvec(&msg.msg_iter, WRITE, iov, 1, len);
+ msg.msg_flags = 0;
+ n = rxrpc_kernel_send_data(rxperf_socket, call->rxcall, &msg, len,
+ rxperf_notify_end_reply_tx);
+ if (n >= 0)
+ return 0; /* Success */
+
+ if (n == -ENOMEM)
+ rxrpc_kernel_abort_call(rxperf_socket, call->rxcall,
+ RXGEN_SS_MARSHAL, -ENOMEM, "GOM");
+ return n;
+}
+
+/*
+ * Add a key to the security keyring.
+ */
+static int rxperf_add_key(struct key *keyring)
+{
+ key_ref_t kref;
+ int ret;
+
+ kref = key_create_or_update(make_key_ref(keyring, true),
+ "rxrpc_s",
+ __stringify(RX_PERF_SERVICE) ":2",
+ secret,
+ sizeof(secret),
+ KEY_POS_VIEW | KEY_POS_READ | KEY_POS_SEARCH
+ | KEY_USR_VIEW,
+ KEY_ALLOC_NOT_IN_QUOTA);
+
+ if (IS_ERR(kref)) {
+ pr_err("Can't allocate rxperf server key: %ld\n", PTR_ERR(kref));
+ return PTR_ERR(kref);
+ }
+
+ ret = key_link(keyring, key_ref_to_ptr(kref));
+ if (ret < 0)
+ pr_err("Can't link rxperf server key: %d\n", ret);
+ key_ref_put(kref);
+ return ret;
+}
+
+/*
+ * Initialise the rxperf server.
+ */
+static int __init rxperf_init(void)
+{
+ struct key *keyring;
+ int ret = -ENOMEM;
+
+ pr_info("Server registering\n");
+
+ rxperf_workqueue = alloc_workqueue("rxperf", 0, 0);
+ if (!rxperf_workqueue)
+ goto error_workqueue;
+
+ keyring = keyring_alloc("rxperf_server",
+ GLOBAL_ROOT_UID, GLOBAL_ROOT_GID, current_cred(),
+ KEY_POS_VIEW | KEY_POS_READ | KEY_POS_SEARCH |
+ KEY_POS_WRITE |
+ KEY_USR_VIEW | KEY_USR_READ | KEY_USR_SEARCH |
+ KEY_USR_WRITE |
+ KEY_OTH_VIEW | KEY_OTH_READ | KEY_OTH_SEARCH,
+ KEY_ALLOC_NOT_IN_QUOTA,
+ NULL, NULL);
+ if (IS_ERR(keyring)) {
+ pr_err("Can't allocate rxperf server keyring: %ld\n",
+ PTR_ERR(keyring));
+ goto error_keyring;
+ }
+ rxperf_sec_keyring = keyring;
+ ret = rxperf_add_key(keyring);
+ if (ret < 0)
+ goto error_key;
+
+ ret = rxperf_open_socket();
+ if (ret < 0)
+ goto error_socket;
+ return 0;
+
+error_socket:
+error_key:
+ key_put(rxperf_sec_keyring);
+error_keyring:
+ destroy_workqueue(rxperf_workqueue);
+ rcu_barrier();
+error_workqueue:
+ pr_err("Failed to register: %d\n", ret);
+ return ret;
+}
+late_initcall(rxperf_init); /* Must be called after net/ to create socket */
+
+static void __exit rxperf_exit(void)
+{
+ pr_info("Server unregistering.\n");
+
+ rxperf_close_socket();
+ key_put(rxperf_sec_keyring);
+ destroy_workqueue(rxperf_workqueue);
+ rcu_barrier();
+}
+module_exit(rxperf_exit);
diff --git a/net/rxrpc/server_key.c b/net/rxrpc/server_key.c
index ee269e0e6ee8..e51940589ee5 100644
--- a/net/rxrpc/server_key.c
+++ b/net/rxrpc/server_key.c
@@ -144,3 +144,28 @@ int rxrpc_server_keyring(struct rxrpc_sock *rx, sockptr_t optval, int optlen)
_leave(" = 0 [key %x]", key->serial);
return 0;
}
+
+/**
+ * rxrpc_sock_set_security_keyring - Set the security keyring for a kernel service
+ * @sk: The socket to set the keyring on
+ * @keyring: The keyring to set
+ *
+ * Set the server security keyring on an rxrpc socket. This is used to provide
+ * the encryption keys for a kernel service.
+ */
+int rxrpc_sock_set_security_keyring(struct sock *sk, struct key *keyring)
+{
+ struct rxrpc_sock *rx = rxrpc_sk(sk);
+ int ret = 0;
+
+ lock_sock(sk);
+ if (rx->securities)
+ ret = -EINVAL;
+ else if (rx->sk.sk_state != RXRPC_UNBOUND)
+ ret = -EISCONN;
+ else
+ rx->securities = key_get(keyring);
+ release_sock(sk);
+ return ret;
+}
+EXPORT_SYMBOL(rxrpc_sock_set_security_keyring);


From: David Howells
Date: 2022-11-23 10:32:31
Subject: [PATCH net-next 09/13] rxrpc: trace: Don't use __builtin_return_address for rxrpc_peer tracing

In rxrpc tracing, use enums to generate lists of points of interest rather
than __builtin_return_address() for the rxrpc_peer tracepoint.

Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
---
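
[Editorial sketch, not part of the patch: a condensed illustration of the
existing EM()/E_() convention in include/trace/events/rxrpc.h that the new
rxrpc_peer_traces list plugs into. The list is written once and expanded
twice, so the enum and the symbol table used by __print_symbolic() can never
drift apart.]

/* One list of points of interest, written once... */
#define rxrpc_peer_traces \
	EM(rxrpc_peer_get_bundle,	"GET bundle ") \
	E_(rxrpc_peer_put_bundle,	"PUT bundle ")

/* ...expanded a first time to declare the enum used at the call sites... */
#undef EM
#undef E_
#define EM(a, b) a,
#define E_(a, b) a
enum rxrpc_peer_trace { rxrpc_peer_traces };

/* ...and a second time to build the { value, "label" } pairs that
 * __print_symbolic(__entry->why, rxrpc_peer_traces) uses in TP_printk().
 */
#undef EM
#undef E_
#define EM(a, b) { a, b },
#define E_(a, b) { a, b }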

include/trace/events/rxrpc.h | 43 +++++++++++++++++++++++++-----------------
net/rxrpc/af_rxrpc.c | 2 +-
net/rxrpc/ar-internal.h | 11 ++++++-----
net/rxrpc/call_accept.c | 8 +++++---
net/rxrpc/call_object.c | 2 +-
net/rxrpc/conn_client.c | 8 ++++----
net/rxrpc/conn_object.c | 2 +-
net/rxrpc/peer_event.c | 8 ++++----
net/rxrpc/peer_object.c | 34 ++++++++++++++++-----------------
net/rxrpc/sendmsg.c | 2 +-
10 files changed, 65 insertions(+), 55 deletions(-)

diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
index 015569845b1d..1c74143a51c1 100644
--- a/include/trace/events/rxrpc.h
+++ b/include/trace/events/rxrpc.h
@@ -63,10 +63,23 @@
E_(rxrpc_local_use_work, "USE work ")

#define rxrpc_peer_traces \
- EM(rxrpc_peer_got, "GOT") \
- EM(rxrpc_peer_new, "NEW") \
- EM(rxrpc_peer_processing, "PRO") \
- E_(rxrpc_peer_put, "PUT")
+ EM(rxrpc_peer_free, "FREE ") \
+ EM(rxrpc_peer_get_accept, "GET accept ") \
+ EM(rxrpc_peer_get_activate_call, "GET act-call") \
+ EM(rxrpc_peer_get_bundle, "GET bundle ") \
+ EM(rxrpc_peer_get_client_conn, "GET cln-conn") \
+ EM(rxrpc_peer_get_input_error, "GET inpt-err") \
+ EM(rxrpc_peer_get_keepalive, "GET keepaliv") \
+ EM(rxrpc_peer_get_lookup_client, "GET look-cln") \
+ EM(rxrpc_peer_get_service_conn, "GET srv-conn") \
+ EM(rxrpc_peer_new_client, "NEW client ") \
+ EM(rxrpc_peer_new_prealloc, "NEW prealloc") \
+ EM(rxrpc_peer_put_bundle, "PUT bundle ") \
+ EM(rxrpc_peer_put_call, "PUT call ") \
+ EM(rxrpc_peer_put_conn, "PUT conn ") \
+ EM(rxrpc_peer_put_discard_tmp, "PUT disc-tmp") \
+ EM(rxrpc_peer_put_input_error, "PUT inpt-err") \
+ E_(rxrpc_peer_put_keepalive, "PUT keepaliv")

#define rxrpc_conn_traces \
EM(rxrpc_conn_got, "GOT") \
@@ -394,30 +407,26 @@ TRACE_EVENT(rxrpc_local,
);

TRACE_EVENT(rxrpc_peer,
- TP_PROTO(unsigned int peer_debug_id, enum rxrpc_peer_trace op,
- int usage, const void *where),
+ TP_PROTO(unsigned int peer_debug_id, int ref, enum rxrpc_peer_trace why),

- TP_ARGS(peer_debug_id, op, usage, where),
+ TP_ARGS(peer_debug_id, ref, why),

TP_STRUCT__entry(
__field(unsigned int, peer )
- __field(int, op )
- __field(int, usage )
- __field(const void *, where )
+ __field(int, ref )
+ __field(int, why )
),

TP_fast_assign(
__entry->peer = peer_debug_id;
- __entry->op = op;
- __entry->usage = usage;
- __entry->where = where;
+ __entry->ref = ref;
+ __entry->why = why;
),

- TP_printk("P=%08x %s u=%d sp=%pSR",
+ TP_printk("P=%08x %s r=%d",
__entry->peer,
- __print_symbolic(__entry->op, rxrpc_peer_traces),
- __entry->usage,
- __entry->where)
+ __print_symbolic(__entry->why, rxrpc_peer_traces),
+ __entry->ref)
);

TRACE_EVENT(rxrpc_conn,
diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
index 989ebca899f3..7a0dc01741e7 100644
--- a/net/rxrpc/af_rxrpc.c
+++ b/net/rxrpc/af_rxrpc.c
@@ -328,7 +328,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock,
mutex_unlock(&call->user_mutex);
}

- rxrpc_put_peer(cp.peer);
+ rxrpc_put_peer(cp.peer, rxrpc_peer_put_discard_tmp);
_leave(" = %p", call);
return call;
}
diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index dde9ce21ef48..6cb111e9761c 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -1063,14 +1063,15 @@ struct rxrpc_peer *rxrpc_lookup_peer_rcu(struct rxrpc_local *,
const struct sockaddr_rxrpc *);
struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *, struct rxrpc_local *,
struct sockaddr_rxrpc *, gfp_t);
-struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *, gfp_t);
+struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *, gfp_t,
+ enum rxrpc_peer_trace);
void rxrpc_new_incoming_peer(struct rxrpc_sock *, struct rxrpc_local *,
struct rxrpc_peer *);
void rxrpc_destroy_all_peers(struct rxrpc_net *);
-struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *);
-struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *);
-void rxrpc_put_peer(struct rxrpc_peer *);
-void rxrpc_put_peer_locked(struct rxrpc_peer *);
+struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *, enum rxrpc_peer_trace);
+struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *, enum rxrpc_peer_trace);
+void rxrpc_put_peer(struct rxrpc_peer *, enum rxrpc_peer_trace);
+void rxrpc_put_peer_locked(struct rxrpc_peer *, enum rxrpc_peer_trace);

/*
* proc.c
diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
index 1b12d4e28373..f6bc3b07c3e5 100644
--- a/net/rxrpc/call_accept.c
+++ b/net/rxrpc/call_accept.c
@@ -70,7 +70,9 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
head = b->peer_backlog_head;
tail = READ_ONCE(b->peer_backlog_tail);
if (CIRC_CNT(head, tail, size) < max) {
- struct rxrpc_peer *peer = rxrpc_alloc_peer(rx->local, gfp);
+ struct rxrpc_peer *peer;
+
+ peer = rxrpc_alloc_peer(rx->local, gfp, rxrpc_peer_new_prealloc);
if (!peer)
return -ENOMEM;
b->peer_backlog[head] = peer;
@@ -286,7 +288,7 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx,
return NULL;

if (!conn) {
- if (peer && !rxrpc_get_peer_maybe(peer))
+ if (peer && !rxrpc_get_peer_maybe(peer, rxrpc_peer_get_service_conn))
peer = NULL;
if (!peer) {
peer = b->peer_backlog[peer_tail];
@@ -323,7 +325,7 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx,
call->conn = conn;
call->security = conn->security;
call->security_ix = conn->security_ix;
- call->peer = rxrpc_get_peer(conn->peer);
+ call->peer = rxrpc_get_peer(conn->peer, rxrpc_peer_get_accept);
call->cong_ssthresh = call->peer->cong_ssthresh;
call->tx_last_sent = ktime_get_real();
return call;
diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
index 59928f0a8fe1..1b725afd6e2c 100644
--- a/net/rxrpc/call_object.c
+++ b/net/rxrpc/call_object.c
@@ -636,7 +636,7 @@ static void rxrpc_destroy_call(struct work_struct *work)
rxrpc_delete_call_timer(call);

rxrpc_put_connection(call->conn);
- rxrpc_put_peer(call->peer);
+ rxrpc_put_peer(call->peer, rxrpc_peer_put_call);
kmem_cache_free(rxrpc_call_jar, call);
if (atomic_dec_and_test(&rxnet->nr_calls))
wake_up_var(&rxnet->nr_calls);
diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
index 9a69b4c1b182..9444da235a48 100644
--- a/net/rxrpc/conn_client.c
+++ b/net/rxrpc/conn_client.c
@@ -123,7 +123,7 @@ static struct rxrpc_bundle *rxrpc_alloc_bundle(struct rxrpc_conn_parameters *cp,
bundle = kzalloc(sizeof(*bundle), gfp);
if (bundle) {
bundle->local = cp->local;
- bundle->peer = rxrpc_get_peer(cp->peer);
+ bundle->peer = rxrpc_get_peer(cp->peer, rxrpc_peer_get_bundle);
bundle->key = cp->key;
bundle->exclusive = cp->exclusive;
bundle->upgrade = cp->upgrade;
@@ -145,7 +145,7 @@ struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *bundle)

static void rxrpc_free_bundle(struct rxrpc_bundle *bundle)
{
- rxrpc_put_peer(bundle->peer);
+ rxrpc_put_peer(bundle->peer, rxrpc_peer_put_bundle);
kfree(bundle);
}

@@ -207,7 +207,7 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp)
write_unlock(&rxnet->conn_lock);

rxrpc_get_bundle(bundle);
- rxrpc_get_peer(conn->peer);
+ rxrpc_get_peer(conn->peer, rxrpc_peer_get_client_conn);
rxrpc_get_local(conn->local, rxrpc_local_get_client_conn);
key_get(conn->key);

@@ -543,7 +543,7 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn,

rxrpc_see_call(call);
list_del_init(&call->chan_wait_link);
- call->peer = rxrpc_get_peer(conn->peer);
+ call->peer = rxrpc_get_peer(conn->peer, rxrpc_peer_get_activate_call);
call->conn = rxrpc_get_connection(conn);
call->cid = conn->proto.cid | channel;
call->call_id = call_id;
diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
index 725359afeac0..554ee5dd3325 100644
--- a/net/rxrpc/conn_object.c
+++ b/net/rxrpc/conn_object.c
@@ -362,7 +362,7 @@ static void rxrpc_destroy_connection(struct rcu_head *rcu)
conn->security->clear(conn);
key_put(conn->key);
rxrpc_put_bundle(conn->bundle);
- rxrpc_put_peer(conn->peer);
+ rxrpc_put_peer(conn->peer, rxrpc_peer_put_conn);

if (atomic_dec_and_test(&conn->local->rxnet->nr_conns))
wake_up_var(&conn->local->rxnet->nr_conns);
diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
index 3f8d104ecaa7..5e97d321ac38 100644
--- a/net/rxrpc/peer_event.c
+++ b/net/rxrpc/peer_event.c
@@ -168,7 +168,7 @@ void rxrpc_error_report(struct sock *sk)
}

peer = rxrpc_lookup_peer_local_rcu(local, skb, &srx);
- if (peer && !rxrpc_get_peer_maybe(peer))
+ if (peer && !rxrpc_get_peer_maybe(peer, rxrpc_peer_get_input_error))
peer = NULL;
if (!peer) {
rcu_read_unlock();
@@ -190,7 +190,7 @@ void rxrpc_error_report(struct sock *sk)
out:
rcu_read_unlock();
rxrpc_free_skb(skb, rxrpc_skb_freed);
- rxrpc_put_peer(peer);
+ rxrpc_put_peer(peer, rxrpc_peer_put_input_error);

_leave("");
}
@@ -263,7 +263,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
struct rxrpc_peer, keepalive_link);

list_del_init(&peer->keepalive_link);
- if (!rxrpc_get_peer_maybe(peer))
+ if (!rxrpc_get_peer_maybe(peer, rxrpc_peer_get_keepalive))
continue;

if (__rxrpc_use_local(peer->local, rxrpc_local_use_peer_keepalive)) {
@@ -291,7 +291,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
&rxnet->peer_keepalive[slot & mask]);
rxrpc_unuse_local(peer->local, rxrpc_local_unuse_peer_keepalive);
}
- rxrpc_put_peer_locked(peer);
+ rxrpc_put_peer_locked(peer, rxrpc_peer_put_keepalive);
}

spin_unlock_bh(&rxnet->peer_hash_lock);
diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
index bcef897560e7..9e682a60a800 100644
--- a/net/rxrpc/peer_object.c
+++ b/net/rxrpc/peer_object.c
@@ -205,9 +205,9 @@ static void rxrpc_assess_MTU_size(struct rxrpc_sock *rx,
/*
* Allocate a peer.
*/
-struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp)
+struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp,
+ enum rxrpc_peer_trace why)
{
- const void *here = __builtin_return_address(0);
struct rxrpc_peer *peer;

_enter("");
@@ -226,7 +226,7 @@ struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp)
rxrpc_peer_init_rtt(peer);

peer->cong_ssthresh = RXRPC_TX_MAX_WINDOW;
- trace_rxrpc_peer(peer->debug_id, rxrpc_peer_new, 1, here);
+ trace_rxrpc_peer(peer->debug_id, 1, why);
}

_leave(" = %p", peer);
@@ -282,7 +282,7 @@ static struct rxrpc_peer *rxrpc_create_peer(struct rxrpc_sock *rx,

_enter("");

- peer = rxrpc_alloc_peer(local, gfp);
+ peer = rxrpc_alloc_peer(local, gfp, rxrpc_peer_new_client);
if (peer) {
memcpy(&peer->srx, srx, sizeof(*srx));
rxrpc_init_peer(rx, peer, hash_key);
@@ -294,6 +294,7 @@ static struct rxrpc_peer *rxrpc_create_peer(struct rxrpc_sock *rx,

static void rxrpc_free_peer(struct rxrpc_peer *peer)
{
+ trace_rxrpc_peer(peer->debug_id, 0, rxrpc_peer_free);
rxrpc_put_local(peer->local, rxrpc_local_put_peer);
kfree_rcu(peer, rcu);
}
@@ -334,7 +335,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *rx,
/* search the peer list first */
rcu_read_lock();
peer = __rxrpc_lookup_peer_rcu(local, srx, hash_key);
- if (peer && !rxrpc_get_peer_maybe(peer))
+ if (peer && !rxrpc_get_peer_maybe(peer, rxrpc_peer_get_lookup_client))
peer = NULL;
rcu_read_unlock();

@@ -352,7 +353,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *rx,

/* Need to check that we aren't racing with someone else */
peer = __rxrpc_lookup_peer_rcu(local, srx, hash_key);
- if (peer && !rxrpc_get_peer_maybe(peer))
+ if (peer && !rxrpc_get_peer_maybe(peer, rxrpc_peer_get_lookup_client))
peer = NULL;
if (!peer) {
hash_add_rcu(rxnet->peer_hash,
@@ -376,27 +377,26 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *rx,
/*
* Get a ref on a peer record.
*/
-struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *peer)
+struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *peer, enum rxrpc_peer_trace why)
{
- const void *here = __builtin_return_address(0);
int r;

__refcount_inc(&peer->ref, &r);
- trace_rxrpc_peer(peer->debug_id, rxrpc_peer_got, r + 1, here);
+ trace_rxrpc_peer(peer->debug_id, r + 1, why);
return peer;
}

/*
* Get a ref on a peer record unless its usage has already reached 0.
*/
-struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *peer)
+struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *peer,
+ enum rxrpc_peer_trace why)
{
- const void *here = __builtin_return_address(0);
int r;

if (peer) {
if (__refcount_inc_not_zero(&peer->ref, &r))
- trace_rxrpc_peer(peer->debug_id, rxrpc_peer_got, r + 1, here);
+ trace_rxrpc_peer(peer->debug_id, r + 1, why);
else
peer = NULL;
}
@@ -423,9 +423,8 @@ static void __rxrpc_put_peer(struct rxrpc_peer *peer)
/*
* Drop a ref on a peer record.
*/
-void rxrpc_put_peer(struct rxrpc_peer *peer)
+void rxrpc_put_peer(struct rxrpc_peer *peer, enum rxrpc_peer_trace why)
{
- const void *here = __builtin_return_address(0);
unsigned int debug_id;
bool dead;
int r;
@@ -433,7 +432,7 @@ void rxrpc_put_peer(struct rxrpc_peer *peer)
if (peer) {
debug_id = peer->debug_id;
dead = __refcount_dec_and_test(&peer->ref, &r);
- trace_rxrpc_peer(debug_id, rxrpc_peer_put, r - 1, here);
+ trace_rxrpc_peer(debug_id, r - 1, why);
if (dead)
__rxrpc_put_peer(peer);
}
@@ -443,15 +442,14 @@ void rxrpc_put_peer(struct rxrpc_peer *peer)
* Drop a ref on a peer record where the caller already holds the
* peer_hash_lock.
*/
-void rxrpc_put_peer_locked(struct rxrpc_peer *peer)
+void rxrpc_put_peer_locked(struct rxrpc_peer *peer, enum rxrpc_peer_trace why)
{
- const void *here = __builtin_return_address(0);
unsigned int debug_id = peer->debug_id;
bool dead;
int r;

dead = __refcount_dec_and_test(&peer->ref, &r);
- trace_rxrpc_peer(debug_id, rxrpc_peer_put, r - 1, here);
+ trace_rxrpc_peer(debug_id, r - 1, why);
if (dead) {
hash_del_rcu(&peer->hash_link);
list_del_init(&peer->keepalive_link);
diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
index e5fd8a95bf71..cfe0badba0b3 100644
--- a/net/rxrpc/sendmsg.c
+++ b/net/rxrpc/sendmsg.c
@@ -604,7 +604,7 @@ rxrpc_new_client_call_for_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg,
atomic_inc_return(&rxrpc_debug_id));
/* The socket is now unlocked */

- rxrpc_put_peer(cp.peer);
+ rxrpc_put_peer(cp.peer, rxrpc_peer_put_discard_tmp);
_leave(" = %p\n", call);
return call;
}


From: David Howells
Date: 2022-11-23 10:32:33
Subject: [PATCH net-next 06/13] rxrpc: Drop rxrpc_conn_parameters from rxrpc_connection and rxrpc_bundle

Remove the rxrpc_conn_parameters struct from the rxrpc_connection and
rxrpc_bundle structs and emplace the members directly. These are going to
get filled in from the rxrpc_call struct in future.

Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
---
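
[Editorial sketch, not part of the patch: the shape of the change, condensed
from the diff below. The embedded parameter block goes away and its members
are hoisted into the containing structs, so every x->params.member access
becomes x->member.]

/* Before: parameters reached through an embedded block:
 *
 *	struct rxrpc_bundle {
 *		struct rxrpc_conn_parameters params;	// ->params.peer, ->params.key, ...
 *		...
 *	};
 *
 * After: the members live directly in the bundle (and likewise in
 * rxrpc_connection), so e.g. conn->params.local->rxnet becomes
 * conn->local->rxnet:
 */
struct rxrpc_bundle {
	struct rxrpc_local	*local;		/* Representation of local endpoint */
	struct rxrpc_peer	*peer;		/* Remote endpoint */
	struct key		*key;		/* Security details */
	/* ... */
};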

net/rxrpc/ar-internal.h | 16 ++++++++++++--
net/rxrpc/call_accept.c | 6 +++--
net/rxrpc/call_event.c | 2 +-
net/rxrpc/call_object.c | 6 +++--
net/rxrpc/conn_client.c | 53 +++++++++++++++++++++++++++------------------
net/rxrpc/conn_event.c | 26 +++++++++++-----------
net/rxrpc/conn_object.c | 22 +++++++++----------
net/rxrpc/conn_service.c | 6 +++--
net/rxrpc/input.c | 4 ++-
net/rxrpc/key.c | 2 +-
net/rxrpc/output.c | 32 ++++++++++++++-------------
net/rxrpc/proc.c | 6 +++--
net/rxrpc/rxkad.c | 54 +++++++++++++++++++++++-----------------------
net/rxrpc/security.c | 4 ++-
14 files changed, 131 insertions(+), 108 deletions(-)

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index 36ece1efb1d4..7c48b0163032 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -403,12 +403,18 @@ enum rxrpc_conn_proto_state {
* RxRPC client connection bundle.
*/
struct rxrpc_bundle {
- struct rxrpc_conn_parameters params;
+ struct rxrpc_local *local; /* Representation of local endpoint */
+ struct rxrpc_peer *peer; /* Remote endpoint */
+ struct key *key; /* Security details */
refcount_t ref;
atomic_t active; /* Number of active users */
unsigned int debug_id;
+ u32 security_level; /* Security level selected */
+ u16 service_id; /* Service ID for this connection */
bool try_upgrade; /* True if the bundle is attempting upgrade */
bool alloc_conn; /* True if someone's getting a conn */
+ bool exclusive; /* T if conn is exclusive */
+ bool upgrade; /* T if service ID can be upgraded */
short alloc_error; /* Error from last conn allocation */
spinlock_t channel_lock;
struct rb_node local_node; /* Node in local->client_conns */
@@ -424,7 +430,9 @@ struct rxrpc_bundle {
*/
struct rxrpc_connection {
struct rxrpc_conn_proto proto;
- struct rxrpc_conn_parameters params;
+ struct rxrpc_local *local; /* Representation of local endpoint */
+ struct rxrpc_peer *peer; /* Remote endpoint */
+ struct key *key; /* Security details */

refcount_t ref;
struct rcu_head rcu;
@@ -471,9 +479,13 @@ struct rxrpc_connection {
atomic_t serial; /* packet serial number counter */
unsigned int hi_serial; /* highest serial number received */
u32 service_id; /* Service ID, possibly upgraded */
+ u32 security_level; /* Security level selected */
u8 security_ix; /* security type */
u8 out_clientflag; /* RXRPC_CLIENT_INITIATED if we are client */
u8 bundle_shift; /* Index into bundle->avail_chans */
+ bool exclusive; /* T if conn is exclusive */
+ bool upgrade; /* T if service ID can be upgraded */
+ u16 orig_service_id; /* Originally requested service ID */
short error; /* Local error code */
};

diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
index 48790ee77019..4888959e4727 100644
--- a/net/rxrpc/call_accept.c
+++ b/net/rxrpc/call_accept.c
@@ -305,8 +305,8 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx,
b->conn_backlog[conn_tail] = NULL;
smp_store_release(&b->conn_backlog_tail,
(conn_tail + 1) & (RXRPC_BACKLOG_MAX - 1));
- conn->params.local = rxrpc_get_local(local);
- conn->params.peer = peer;
+ conn->local = rxrpc_get_local(local);
+ conn->peer = peer;
rxrpc_see_connection(conn);
rxrpc_new_incoming_connection(rx, conn, sec, skb);
} else {
@@ -323,7 +323,7 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx,
call->conn = conn;
call->security = conn->security;
call->security_ix = conn->security_ix;
- call->peer = rxrpc_get_peer(conn->params.peer);
+ call->peer = rxrpc_get_peer(conn->peer);
call->cong_ssthresh = call->peer->cong_ssthresh;
call->tx_last_sent = ktime_get_real();
return call;
diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
index 1e21a708390e..75e7c2903b2e 100644
--- a/net/rxrpc/call_event.c
+++ b/net/rxrpc/call_event.c
@@ -69,7 +69,7 @@ void rxrpc_propose_delay_ACK(struct rxrpc_call *call, rxrpc_serial_t serial,
void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason,
rxrpc_serial_t serial, enum rxrpc_propose_ack_trace why)
{
- struct rxrpc_local *local = call->conn->params.local;
+ struct rxrpc_local *local = call->conn->local;
struct rxrpc_txbuf *txb;

if (test_bit(RXRPC_CALL_DISCONNECTED, &call->flags))
diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
index e36a317b2e9a..59928f0a8fe1 100644
--- a/net/rxrpc/call_object.c
+++ b/net/rxrpc/call_object.c
@@ -417,9 +417,9 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx,
conn->channels[chan].call_id = call->call_id;
rcu_assign_pointer(conn->channels[chan].call, call);

- spin_lock(&conn->params.peer->lock);
- hlist_add_head_rcu(&call->error_link, &conn->params.peer->error_targets);
- spin_unlock(&conn->params.peer->lock);
+ spin_lock(&conn->peer->lock);
+ hlist_add_head_rcu(&call->error_link, &conn->peer->error_targets);
+ spin_unlock(&conn->peer->lock);

rxrpc_start_call_timer(call);
_leave("");
diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
index 2b76fbffd4dd..71404b33623f 100644
--- a/net/rxrpc/conn_client.c
+++ b/net/rxrpc/conn_client.c
@@ -51,7 +51,7 @@ static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle);
static int rxrpc_get_client_connection_id(struct rxrpc_connection *conn,
gfp_t gfp)
{
- struct rxrpc_net *rxnet = conn->params.local->rxnet;
+ struct rxrpc_net *rxnet = conn->local->rxnet;
int id;

_enter("");
@@ -122,8 +122,13 @@ static struct rxrpc_bundle *rxrpc_alloc_bundle(struct rxrpc_conn_parameters *cp,

bundle = kzalloc(sizeof(*bundle), gfp);
if (bundle) {
- bundle->params = *cp;
- rxrpc_get_peer(bundle->params.peer);
+ bundle->local = cp->local;
+ bundle->peer = rxrpc_get_peer(cp->peer);
+ bundle->key = cp->key;
+ bundle->exclusive = cp->exclusive;
+ bundle->upgrade = cp->upgrade;
+ bundle->service_id = cp->service_id;
+ bundle->security_level = cp->security_level;
refcount_set(&bundle->ref, 1);
atomic_set(&bundle->active, 1);
spin_lock_init(&bundle->channel_lock);
@@ -140,7 +145,7 @@ struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *bundle)

static void rxrpc_free_bundle(struct rxrpc_bundle *bundle)
{
- rxrpc_put_peer(bundle->params.peer);
+ rxrpc_put_peer(bundle->peer);
kfree(bundle);
}

@@ -164,7 +169,7 @@ static struct rxrpc_connection *
rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp)
{
struct rxrpc_connection *conn;
- struct rxrpc_net *rxnet = bundle->params.local->rxnet;
+ struct rxrpc_net *rxnet = bundle->local->rxnet;
int ret;

_enter("");
@@ -177,10 +182,16 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp)

refcount_set(&conn->ref, 1);
conn->bundle = bundle;
- conn->params = bundle->params;
+ conn->local = bundle->local;
+ conn->peer = bundle->peer;
+ conn->key = bundle->key;
+ conn->exclusive = bundle->exclusive;
+ conn->upgrade = bundle->upgrade;
+ conn->orig_service_id = bundle->service_id;
+ conn->security_level = bundle->security_level;
conn->out_clientflag = RXRPC_CLIENT_INITIATED;
conn->state = RXRPC_CONN_CLIENT;
- conn->service_id = conn->params.service_id;
+ conn->service_id = conn->orig_service_id;

ret = rxrpc_get_client_connection_id(conn, gfp);
if (ret < 0)
@@ -196,9 +207,9 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp)
write_unlock(&rxnet->conn_lock);

rxrpc_get_bundle(bundle);
- rxrpc_get_peer(conn->params.peer);
- rxrpc_get_local(conn->params.local);
- key_get(conn->params.key);
+ rxrpc_get_peer(conn->peer);
+ rxrpc_get_local(conn->local);
+ key_get(conn->key);

trace_rxrpc_conn(conn->debug_id, rxrpc_conn_new_client,
refcount_read(&conn->ref),
@@ -228,7 +239,7 @@ static bool rxrpc_may_reuse_conn(struct rxrpc_connection *conn)
if (!conn)
goto dont_reuse;

- rxnet = conn->params.local->rxnet;
+ rxnet = conn->local->rxnet;
if (test_bit(RXRPC_CONN_DONT_REUSE, &conn->flags))
goto dont_reuse;

@@ -285,7 +296,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *c
while (p) {
bundle = rb_entry(p, struct rxrpc_bundle, local_node);

-#define cmp(X) ((long)bundle->params.X - (long)cp->X)
+#define cmp(X) ((long)bundle->X - (long)cp->X)
diff = (cmp(peer) ?:
cmp(key) ?:
cmp(security_level) ?:
@@ -314,7 +325,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *c
parent = *pp;
bundle = rb_entry(parent, struct rxrpc_bundle, local_node);

-#define cmp(X) ((long)bundle->params.X - (long)cp->X)
+#define cmp(X) ((long)bundle->X - (long)cp->X)
diff = (cmp(peer) ?:
cmp(key) ?:
cmp(security_level) ?:
@@ -532,7 +543,7 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn,

rxrpc_see_call(call);
list_del_init(&call->chan_wait_link);
- call->peer = rxrpc_get_peer(conn->params.peer);
+ call->peer = rxrpc_get_peer(conn->peer);
call->conn = rxrpc_get_connection(conn);
call->cid = conn->proto.cid | channel;
call->call_id = call_id;
@@ -569,7 +580,7 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn,
*/
static void rxrpc_unidle_conn(struct rxrpc_bundle *bundle, struct rxrpc_connection *conn)
{
- struct rxrpc_net *rxnet = bundle->params.local->rxnet;
+ struct rxrpc_net *rxnet = bundle->local->rxnet;
bool drop_ref;

if (!list_empty(&conn->cache_link)) {
@@ -795,7 +806,7 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call
{
struct rxrpc_connection *conn;
struct rxrpc_channel *chan = NULL;
- struct rxrpc_net *rxnet = bundle->params.local->rxnet;
+ struct rxrpc_net *rxnet = bundle->local->rxnet;
unsigned int channel;
bool may_reuse;
u32 cid;
@@ -936,11 +947,11 @@ static void rxrpc_unbundle_conn(struct rxrpc_connection *conn)
*/
static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle)
{
- struct rxrpc_local *local = bundle->params.local;
+ struct rxrpc_local *local = bundle->local;
bool need_put = false;

if (atomic_dec_and_lock(&bundle->active, &local->client_bundles_lock)) {
- if (!bundle->params.exclusive) {
+ if (!bundle->exclusive) {
_debug("erase bundle");
rb_erase(&bundle->local_node, &local->client_bundles);
need_put = true;
@@ -957,7 +968,7 @@ static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle)
*/
static void rxrpc_kill_client_conn(struct rxrpc_connection *conn)
{
- struct rxrpc_local *local = conn->params.local;
+ struct rxrpc_local *local = conn->local;
struct rxrpc_net *rxnet = local->rxnet;

_enter("C=%x", conn->debug_id);
@@ -1036,7 +1047,7 @@ void rxrpc_discard_expired_client_conns(struct work_struct *work)
expiry = rxrpc_conn_idle_client_expiry;
if (nr_conns > rxrpc_reap_client_connections)
expiry = rxrpc_conn_idle_client_fast_expiry;
- if (conn->params.local->service_closed)
+ if (conn->local->service_closed)
expiry = rxrpc_closed_conn_expiry * HZ;

conn_expires_at = conn->idle_timestamp + expiry;
@@ -1110,7 +1121,7 @@ void rxrpc_clean_up_local_conns(struct rxrpc_local *local)

list_for_each_entry_safe(conn, tmp, &rxnet->idle_client_conns,
cache_link) {
- if (conn->params.local == local) {
+ if (conn->local == local) {
trace_rxrpc_client(conn, -1, rxrpc_client_discard);
list_move(&conn->cache_link, &graveyard);
}
diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
index d5549cbfc71b..71ed6b9dc63a 100644
--- a/net/rxrpc/conn_event.c
+++ b/net/rxrpc/conn_event.c
@@ -52,8 +52,8 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
if (skb && call_id != sp->hdr.callNumber)
return;

- msg.msg_name = &conn->params.peer->srx.transport;
- msg.msg_namelen = conn->params.peer->srx.transport_len;
+ msg.msg_name = &conn->peer->srx.transport;
+ msg.msg_namelen = conn->peer->srx.transport_len;
msg.msg_control = NULL;
msg.msg_controllen = 0;
msg.msg_flags = 0;
@@ -86,8 +86,8 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
break;

case RXRPC_PACKET_TYPE_ACK:
- mtu = conn->params.peer->if_mtu;
- mtu -= conn->params.peer->hdrsize;
+ mtu = conn->peer->if_mtu;
+ mtu -= conn->peer->hdrsize;
pkt.ack.bufferSpace = 0;
pkt.ack.maxSkew = htons(skb ? skb->priority : 0);
pkt.ack.firstPacket = htonl(chan->last_seq + 1);
@@ -131,8 +131,8 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
break;
}

- ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, ioc, len);
- conn->params.peer->last_tx_at = ktime_get_seconds();
+ ret = kernel_sendmsg(conn->local->socket, &msg, iov, ioc, len);
+ conn->peer->last_tx_at = ktime_get_seconds();
if (ret < 0)
trace_rxrpc_tx_fail(chan->call_debug_id, serial, ret,
rxrpc_tx_point_call_final_resend);
@@ -211,8 +211,8 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags);
spin_unlock_bh(&conn->state_lock);

- msg.msg_name = &conn->params.peer->srx.transport;
- msg.msg_namelen = conn->params.peer->srx.transport_len;
+ msg.msg_name = &conn->peer->srx.transport;
+ msg.msg_namelen = conn->peer->srx.transport_len;
msg.msg_control = NULL;
msg.msg_controllen = 0;
msg.msg_flags = 0;
@@ -241,7 +241,7 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
rxrpc_abort_calls(conn, RXRPC_CALL_LOCALLY_ABORTED, serial);
whdr.serial = htonl(serial);

- ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len);
+ ret = kernel_sendmsg(conn->local->socket, &msg, iov, 2, len);
if (ret < 0) {
trace_rxrpc_tx_fail(conn->debug_id, serial, ret,
rxrpc_tx_point_conn_abort);
@@ -251,7 +251,7 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,

trace_rxrpc_tx_packet(conn->debug_id, &whdr, rxrpc_tx_point_conn_abort);

- conn->params.peer->last_tx_at = ktime_get_seconds();
+ conn->peer->last_tx_at = ktime_get_seconds();

_leave(" = 0");
return 0;
@@ -330,7 +330,7 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
return ret;

ret = conn->security->init_connection_security(
- conn, conn->params.key->payload.data[0]);
+ conn, conn->key->payload.data[0]);
if (ret < 0)
return ret;

@@ -484,9 +484,9 @@ void rxrpc_process_connection(struct work_struct *work)

rxrpc_see_connection(conn);

- if (__rxrpc_use_local(conn->params.local)) {
+ if (__rxrpc_use_local(conn->local)) {
rxrpc_do_process_connection(conn);
- rxrpc_unuse_local(conn->params.local);
+ rxrpc_unuse_local(conn->local);
}

rxrpc_put_connection(conn);
diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
index d5d15389406f..ad6e5ee1f069 100644
--- a/net/rxrpc/conn_object.c
+++ b/net/rxrpc/conn_object.c
@@ -120,10 +120,10 @@ struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *local,
}

if (conn->proto.epoch != k.epoch ||
- conn->params.local != local)
+ conn->local != local)
goto not_found;

- peer = conn->params.peer;
+ peer = conn->peer;
switch (srx.transport.family) {
case AF_INET:
if (peer->srx.transport.sin.sin_port !=
@@ -231,7 +231,7 @@ void rxrpc_disconnect_call(struct rxrpc_call *call)
*/
void rxrpc_kill_connection(struct rxrpc_connection *conn)
{
- struct rxrpc_net *rxnet = conn->params.local->rxnet;
+ struct rxrpc_net *rxnet = conn->local->rxnet;

ASSERT(!rcu_access_pointer(conn->channels[0].call) &&
!rcu_access_pointer(conn->channels[1].call) &&
@@ -340,7 +340,7 @@ void rxrpc_put_service_conn(struct rxrpc_connection *conn)
__refcount_dec(&conn->ref, &r);
trace_rxrpc_conn(debug_id, rxrpc_conn_put_service, r - 1, here);
if (r - 1 == 1)
- rxrpc_set_service_reap_timer(conn->params.local->rxnet,
+ rxrpc_set_service_reap_timer(conn->local->rxnet,
jiffies + rxrpc_connection_expiry);
}

@@ -360,13 +360,13 @@ static void rxrpc_destroy_connection(struct rcu_head *rcu)
rxrpc_purge_queue(&conn->rx_queue);

conn->security->clear(conn);
- key_put(conn->params.key);
+ key_put(conn->key);
rxrpc_put_bundle(conn->bundle);
- rxrpc_put_peer(conn->params.peer);
+ rxrpc_put_peer(conn->peer);

- if (atomic_dec_and_test(&conn->params.local->rxnet->nr_conns))
- wake_up_var(&conn->params.local->rxnet->nr_conns);
- rxrpc_put_local(conn->params.local);
+ if (atomic_dec_and_test(&conn->local->rxnet->nr_conns))
+ wake_up_var(&conn->local->rxnet->nr_conns);
+ rxrpc_put_local(conn->local);

kfree(conn);
_leave("");
@@ -397,10 +397,10 @@ void rxrpc_service_connection_reaper(struct work_struct *work)
if (conn->state == RXRPC_CONN_SERVICE_PREALLOC)
continue;

- if (rxnet->live && !conn->params.local->dead) {
+ if (rxnet->live && !conn->local->dead) {
idle_timestamp = READ_ONCE(conn->idle_timestamp);
expire_at = idle_timestamp + rxrpc_connection_expiry * HZ;
- if (conn->params.local->service_closed)
+ if (conn->local->service_closed)
expire_at = idle_timestamp + rxrpc_closed_conn_expiry * HZ;

_debug("reap CONN %d { u=%d,t=%ld }",
diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c
index 75f903099eb0..a3b91864ef21 100644
--- a/net/rxrpc/conn_service.c
+++ b/net/rxrpc/conn_service.c
@@ -164,7 +164,7 @@ void rxrpc_new_incoming_connection(struct rxrpc_sock *rx,

conn->proto.epoch = sp->hdr.epoch;
conn->proto.cid = sp->hdr.cid & RXRPC_CIDMASK;
- conn->params.service_id = sp->hdr.serviceId;
+ conn->orig_service_id = sp->hdr.serviceId;
conn->service_id = sp->hdr.serviceId;
conn->security_ix = sp->hdr.securityIndex;
conn->out_clientflag = 0;
@@ -183,7 +183,7 @@ void rxrpc_new_incoming_connection(struct rxrpc_sock *rx,
conn->service_id = rx->service_upgrade.to;

/* Make the connection a target for incoming packets. */
- rxrpc_publish_service_conn(conn->params.peer, conn);
+ rxrpc_publish_service_conn(conn->peer, conn);
}

/*
@@ -192,7 +192,7 @@ void rxrpc_new_incoming_connection(struct rxrpc_sock *rx,
*/
void rxrpc_unpublish_service_conn(struct rxrpc_connection *conn)
{
- struct rxrpc_peer *peer = conn->params.peer;
+ struct rxrpc_peer *peer = conn->peer;

write_seqlock_bh(&peer->service_conn_lock);
if (test_and_clear_bit(RXRPC_CONN_IN_SERVICE_CONNS, &conn->flags))
diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
index e2461f29d765..44caf88e04b8 100644
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -1339,10 +1339,10 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb)

if (!test_bit(RXRPC_CONN_PROBING_FOR_UPGRADE, &conn->flags))
goto reupgrade;
- old_id = cmpxchg(&conn->service_id, conn->params.service_id,
+ old_id = cmpxchg(&conn->service_id, conn->orig_service_id,
sp->hdr.serviceId);

- if (old_id != conn->params.service_id &&
+ if (old_id != conn->orig_service_id &&
old_id != sp->hdr.serviceId)
goto reupgrade;
}
diff --git a/net/rxrpc/key.c b/net/rxrpc/key.c
index 8d2073e0e3da..1ecddcb3745a 100644
--- a/net/rxrpc/key.c
+++ b/net/rxrpc/key.c
@@ -513,7 +513,7 @@ int rxrpc_get_server_data_key(struct rxrpc_connection *conn,
if (ret < 0)
goto error;

- conn->params.key = key;
+ conn->key = key;
_leave(" = 0 [%d]", key_serial(key));
return 0;

diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
index 635acf3dbd77..b5d8eac8c49c 100644
--- a/net/rxrpc/output.c
+++ b/net/rxrpc/output.c
@@ -142,8 +142,8 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn,
txb->ack.reason = RXRPC_ACK_IDLE;
}

- mtu = conn->params.peer->if_mtu;
- mtu -= conn->params.peer->hdrsize;
+ mtu = conn->peer->if_mtu;
+ mtu -= conn->peer->hdrsize;
jmax = rxrpc_rx_jumbo_max;
qsize = (window - 1) - call->rx_consumed;
rsize = max_t(int, call->rx_winsize - qsize, 0);
@@ -259,7 +259,7 @@ static int rxrpc_send_ack_packet(struct rxrpc_local *local, struct rxrpc_txbuf *
txb->ack.previousPacket = htonl(call->rx_highest_seq);

iov_iter_kvec(&msg.msg_iter, WRITE, iov, 1, len);
- ret = do_udp_sendmsg(conn->params.local->socket, &msg, len);
+ ret = do_udp_sendmsg(conn->local->socket, &msg, len);
call->peer->last_tx_at = ktime_get_seconds();
if (ret < 0)
trace_rxrpc_tx_fail(call->debug_id, serial, ret,
@@ -368,8 +368,8 @@ int rxrpc_send_abort_packet(struct rxrpc_call *call)
pkt.whdr.serial = htonl(serial);

iov_iter_kvec(&msg.msg_iter, WRITE, iov, 1, sizeof(pkt));
- ret = do_udp_sendmsg(conn->params.local->socket, &msg, sizeof(pkt));
- conn->params.peer->last_tx_at = ktime_get_seconds();
+ ret = do_udp_sendmsg(conn->local->socket, &msg, sizeof(pkt));
+ conn->peer->last_tx_at = ktime_get_seconds();
if (ret < 0)
trace_rxrpc_tx_fail(call->debug_id, serial, ret,
rxrpc_tx_point_call_abort);
@@ -473,7 +473,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb)
if (txb->len >= call->peer->maxdata)
goto send_fragmentable;

- down_read(&conn->params.local->defrag_sem);
+ down_read(&conn->local->defrag_sem);

txb->last_sent = ktime_get_real();
if (txb->wire.flags & RXRPC_REQUEST_ACK)
@@ -486,10 +486,10 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb)
* message and update the peer record
*/
rxrpc_inc_stat(call->rxnet, stat_tx_data_send);
- ret = do_udp_sendmsg(conn->params.local->socket, &msg, len);
- conn->params.peer->last_tx_at = ktime_get_seconds();
+ ret = do_udp_sendmsg(conn->local->socket, &msg, len);
+ conn->peer->last_tx_at = ktime_get_seconds();

- up_read(&conn->params.local->defrag_sem);
+ up_read(&conn->local->defrag_sem);
if (ret < 0) {
rxrpc_cancel_rtt_probe(call, serial, rtt_slot);
trace_rxrpc_tx_fail(call->debug_id, serial, ret,
@@ -549,22 +549,22 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb)
/* attempt to send this message with fragmentation enabled */
_debug("send fragment");

- down_write(&conn->params.local->defrag_sem);
+ down_write(&conn->local->defrag_sem);

txb->last_sent = ktime_get_real();
if (txb->wire.flags & RXRPC_REQUEST_ACK)
rtt_slot = rxrpc_begin_rtt_probe(call, serial, rxrpc_rtt_tx_data);

- switch (conn->params.local->srx.transport.family) {
+ switch (conn->local->srx.transport.family) {
case AF_INET6:
case AF_INET:
- ip_sock_set_mtu_discover(conn->params.local->socket->sk,
+ ip_sock_set_mtu_discover(conn->local->socket->sk,
IP_PMTUDISC_DONT);
rxrpc_inc_stat(call->rxnet, stat_tx_data_send_frag);
- ret = do_udp_sendmsg(conn->params.local->socket, &msg, len);
- conn->params.peer->last_tx_at = ktime_get_seconds();
+ ret = do_udp_sendmsg(conn->local->socket, &msg, len);
+ conn->peer->last_tx_at = ktime_get_seconds();

- ip_sock_set_mtu_discover(conn->params.local->socket->sk,
+ ip_sock_set_mtu_discover(conn->local->socket->sk,
IP_PMTUDISC_DO);
break;

@@ -582,7 +582,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb)
}
rxrpc_tx_backoff(call, ret);

- up_write(&conn->params.local->defrag_sem);
+ up_write(&conn->local->defrag_sem);
goto done;
}

diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c
index fae22a8b38d6..bb2edf6db896 100644
--- a/net/rxrpc/proc.c
+++ b/net/rxrpc/proc.c
@@ -172,9 +172,9 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v)
goto print;
}

- sprintf(lbuff, "%pISpc", &conn->params.local->srx.transport);
+ sprintf(lbuff, "%pISpc", &conn->local->srx.transport);

- sprintf(rbuff, "%pISpc", &conn->params.peer->srx.transport);
+ sprintf(rbuff, "%pISpc", &conn->peer->srx.transport);
print:
seq_printf(seq,
"UDP %-47.47s %-47.47s %4x %08x %s %3u"
@@ -186,7 +186,7 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v)
rxrpc_conn_is_service(conn) ? "Svc" : "Clt",
refcount_read(&conn->ref),
rxrpc_conn_states[conn->state],
- key_serial(conn->params.key),
+ key_serial(conn->key),
atomic_read(&conn->serial),
conn->hi_serial,
conn->channels[0].call_id,
diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c
index 36cf40442a7e..d1233720e05f 100644
--- a/net/rxrpc/rxkad.c
+++ b/net/rxrpc/rxkad.c
@@ -103,7 +103,7 @@ static int rxkad_init_connection_security(struct rxrpc_connection *conn,
struct crypto_sync_skcipher *ci;
int ret;

- _enter("{%d},{%x}", conn->debug_id, key_serial(conn->params.key));
+ _enter("{%d},{%x}", conn->debug_id, key_serial(conn->key));

conn->security_ix = token->security_index;

@@ -118,7 +118,7 @@ static int rxkad_init_connection_security(struct rxrpc_connection *conn,
sizeof(token->kad->session_key)) < 0)
BUG();

- switch (conn->params.security_level) {
+ switch (conn->security_level) {
case RXRPC_SECURITY_PLAIN:
case RXRPC_SECURITY_AUTH:
case RXRPC_SECURITY_ENCRYPT:
@@ -150,7 +150,7 @@ static int rxkad_how_much_data(struct rxrpc_call *call, size_t remain,
{
size_t shdr, buf_size, chunk;

- switch (call->conn->params.security_level) {
+ switch (call->conn->security_level) {
default:
buf_size = chunk = min_t(size_t, remain, RXRPC_JUMBO_DATALEN);
shdr = 0;
@@ -192,7 +192,7 @@ static int rxkad_prime_packet_security(struct rxrpc_connection *conn,

_enter("");

- if (!conn->params.key)
+ if (!conn->key)
return 0;

tmpbuf = kmalloc(tmpsize, GFP_KERNEL);
@@ -205,7 +205,7 @@ static int rxkad_prime_packet_security(struct rxrpc_connection *conn,
return -ENOMEM;
}

- token = conn->params.key->payload.data[0];
+ token = conn->key->payload.data[0];
memcpy(&iv, token->kad->session_key, sizeof(iv));

tmpbuf[0] = htonl(conn->proto.epoch);
@@ -317,7 +317,7 @@ static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call,
}

/* encrypt from the session key */
- token = call->conn->params.key->payload.data[0];
+ token = call->conn->key->payload.data[0];
memcpy(&iv, token->kad->session_key, sizeof(iv));

sg_init_one(&sg, txb->data, txb->len);
@@ -344,13 +344,13 @@ static int rxkad_secure_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb)
int ret;

_enter("{%d{%x}},{#%u},%u,",
- call->debug_id, key_serial(call->conn->params.key),
+ call->debug_id, key_serial(call->conn->key),
txb->seq, txb->len);

if (!call->conn->rxkad.cipher)
return 0;

- ret = key_validate(call->conn->params.key);
+ ret = key_validate(call->conn->key);
if (ret < 0)
return ret;

@@ -380,7 +380,7 @@ static int rxkad_secure_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb)
y = 1; /* zero checksums are not permitted */
txb->wire.cksum = htons(y);

- switch (call->conn->params.security_level) {
+ switch (call->conn->security_level) {
case RXRPC_SECURITY_PLAIN:
ret = 0;
break;
@@ -525,7 +525,7 @@ static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb,
}

/* decrypt from the session key */
- token = call->conn->params.key->payload.data[0];
+ token = call->conn->key->payload.data[0];
memcpy(&iv, token->kad->session_key, sizeof(iv));

skcipher_request_set_sync_tfm(req, call->conn->rxkad.cipher);
@@ -596,7 +596,7 @@ static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb)
u32 x, y;

_enter("{%d{%x}},{#%u}",
- call->debug_id, key_serial(call->conn->params.key), seq);
+ call->debug_id, key_serial(call->conn->key), seq);

if (!call->conn->rxkad.cipher)
return 0;
@@ -632,7 +632,7 @@ static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb)
goto protocol_error;
}

- switch (call->conn->params.security_level) {
+ switch (call->conn->security_level) {
case RXRPC_SECURITY_PLAIN:
ret = 0;
break;
@@ -678,8 +678,8 @@ static int rxkad_issue_challenge(struct rxrpc_connection *conn)
challenge.min_level = htonl(0);
challenge.__padding = 0;

- msg.msg_name = &conn->params.peer->srx.transport;
- msg.msg_namelen = conn->params.peer->srx.transport_len;
+ msg.msg_name = &conn->peer->srx.transport;
+ msg.msg_namelen = conn->peer->srx.transport_len;
msg.msg_control = NULL;
msg.msg_controllen = 0;
msg.msg_flags = 0;
@@ -705,14 +705,14 @@ static int rxkad_issue_challenge(struct rxrpc_connection *conn)
serial = atomic_inc_return(&conn->serial);
whdr.serial = htonl(serial);

- ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len);
+ ret = kernel_sendmsg(conn->local->socket, &msg, iov, 2, len);
if (ret < 0) {
trace_rxrpc_tx_fail(conn->debug_id, serial, ret,
rxrpc_tx_point_rxkad_challenge);
return -EAGAIN;
}

- conn->params.peer->last_tx_at = ktime_get_seconds();
+ conn->peer->last_tx_at = ktime_get_seconds();
trace_rxrpc_tx_packet(conn->debug_id, &whdr,
rxrpc_tx_point_rxkad_challenge);
_leave(" = 0");
@@ -736,8 +736,8 @@ static int rxkad_send_response(struct rxrpc_connection *conn,

_enter("");

- msg.msg_name = &conn->params.peer->srx.transport;
- msg.msg_namelen = conn->params.peer->srx.transport_len;
+ msg.msg_name = &conn->peer->srx.transport;
+ msg.msg_namelen = conn->peer->srx.transport_len;
msg.msg_control = NULL;
msg.msg_controllen = 0;
msg.msg_flags = 0;
@@ -762,14 +762,14 @@ static int rxkad_send_response(struct rxrpc_connection *conn,
serial = atomic_inc_return(&conn->serial);
whdr.serial = htonl(serial);

- ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 3, len);
+ ret = kernel_sendmsg(conn->local->socket, &msg, iov, 3, len);
if (ret < 0) {
trace_rxrpc_tx_fail(conn->debug_id, serial, ret,
rxrpc_tx_point_rxkad_response);
return -EAGAIN;
}

- conn->params.peer->last_tx_at = ktime_get_seconds();
+ conn->peer->last_tx_at = ktime_get_seconds();
_leave(" = 0");
return 0;
}
@@ -832,15 +832,15 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn,
u32 version, nonce, min_level, abort_code;
int ret;

- _enter("{%d,%x}", conn->debug_id, key_serial(conn->params.key));
+ _enter("{%d,%x}", conn->debug_id, key_serial(conn->key));

eproto = tracepoint_string("chall_no_key");
abort_code = RX_PROTOCOL_ERROR;
- if (!conn->params.key)
+ if (!conn->key)
goto protocol_error;

abort_code = RXKADEXPIRED;
- ret = key_validate(conn->params.key);
+ ret = key_validate(conn->key);
if (ret < 0)
goto other_error;

@@ -863,10 +863,10 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn,

abort_code = RXKADLEVELFAIL;
ret = -EACCES;
- if (conn->params.security_level < min_level)
+ if (conn->security_level < min_level)
goto other_error;

- token = conn->params.key->payload.data[0];
+ token = conn->key->payload.data[0];

/* build the response packet */
resp = kzalloc(sizeof(struct rxkad_response), GFP_NOFS);
@@ -878,7 +878,7 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn,
resp->encrypted.cid = htonl(conn->proto.cid);
resp->encrypted.securityIndex = htonl(conn->security_ix);
resp->encrypted.inc_nonce = htonl(nonce + 1);
- resp->encrypted.level = htonl(conn->params.security_level);
+ resp->encrypted.level = htonl(conn->security_level);
resp->kvno = htonl(token->kad->kvno);
resp->ticket_len = htonl(token->kad->ticket_len);
resp->encrypted.call_id[0] = htonl(conn->channels[0].call_counter);
@@ -1226,7 +1226,7 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
level = ntohl(response->encrypted.level);
if (level > RXRPC_SECURITY_ENCRYPT)
goto protocol_error_free;
- conn->params.security_level = level;
+ conn->security_level = level;

/* create a key to hold the security data and expiration time - after
* this the connection security can be handled in exactly the same way
diff --git a/net/rxrpc/security.c b/net/rxrpc/security.c
index 50cb5f1ee0c0..e6ddac9b3732 100644
--- a/net/rxrpc/security.c
+++ b/net/rxrpc/security.c
@@ -69,7 +69,7 @@ int rxrpc_init_client_conn_security(struct rxrpc_connection *conn)
{
const struct rxrpc_security *sec;
struct rxrpc_key_token *token;
- struct key *key = conn->params.key;
+ struct key *key = conn->key;
int ret;

_enter("{%d},{%x}", conn->debug_id, key_serial(key));
@@ -163,7 +163,7 @@ struct key *rxrpc_look_up_server_security(struct rxrpc_connection *conn,

rcu_read_lock();

- rx = rcu_dereference(conn->params.local->service);
+ rx = rcu_dereference(conn->local->service);
if (!rx)
goto out;



2022-11-23 10:34:08

by David Howells

[permalink] [raw]
Subject: [PATCH net-next 07/13] rxrpc: Extract the code from a received ABORT packet much earlier

Extract the abort code from a received rx ABORT packet much earlier and in a
single place, and harmonise the responses to malformed ABORT packets.
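
For illustration only (the helper name here is made up; the real one,
rxrpc_extract_abort(), is added in the input.c hunk below): the abort code
sits immediately after the rxrpc wire header, so it can be copied out once at
packet-input time and parked in skb->priority, sparing later consumers from
re-parsing the packet.

/* Sketch mirroring the helper added below. */
static bool abort_code_to_priority(struct sk_buff *skb)
{
	__be32 wtmp;

	/* The abort code immediately follows the wire header. */
	if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header),
			  &wtmp, sizeof(wtmp)) < 0)
		return false;	/* Malformed: the caller just discards it. */

	skb->priority = ntohl(wtmp);	/* Stash for later consumers. */
	return true;
}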

Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
---

net/rxrpc/conn_event.c | 12 +-----------
net/rxrpc/input.c | 31 +++++++++++++++++++------------
2 files changed, 20 insertions(+), 23 deletions(-)

diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
index 71ed6b9dc63a..f890a30c4df6 100644
--- a/net/rxrpc/conn_event.c
+++ b/net/rxrpc/conn_event.c
@@ -282,8 +282,6 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
u32 *_abort_code)
{
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
- __be32 wtmp;
- u32 abort_code;
int loop, ret;

if (conn->state >= RXRPC_CONN_REMOTELY_ABORTED) {
@@ -305,16 +303,8 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
return 0;

case RXRPC_PACKET_TYPE_ABORT:
- if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header),
- &wtmp, sizeof(wtmp)) < 0) {
- trace_rxrpc_rx_eproto(NULL, sp->hdr.serial,
- tracepoint_string("bad_abort"));
- return -EPROTO;
- }
- abort_code = ntohl(wtmp);
-
conn->error = -ECONNABORTED;
- conn->abort_code = abort_code;
+ conn->abort_code = skb->priority;
conn->state = RXRPC_CONN_REMOTELY_ABORTED;
set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags);
rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED, sp->hdr.serial);
diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
index 44caf88e04b8..42c8257158f7 100644
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -1019,20 +1019,11 @@ static void rxrpc_input_ackall(struct rxrpc_call *call, struct sk_buff *skb)
static void rxrpc_input_abort(struct rxrpc_call *call, struct sk_buff *skb)
{
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
- __be32 wtmp;
- u32 abort_code = RX_CALL_DEAD;
-
- _enter("");
-
- if (skb->len >= 4 &&
- skb_copy_bits(skb, sizeof(struct rxrpc_wire_header),
- &wtmp, sizeof(wtmp)) >= 0)
- abort_code = ntohl(wtmp);

- trace_rxrpc_rx_abort(call, sp->hdr.serial, abort_code);
+ trace_rxrpc_rx_abort(call, sp->hdr.serial, skb->priority);

rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED,
- abort_code, -ECONNABORTED);
+ skb->priority, -ECONNABORTED);
}

/*
@@ -1193,6 +1184,20 @@ int rxrpc_extract_header(struct rxrpc_skb_priv *sp, struct sk_buff *skb)
return 0;
}

+/*
+ * Extract the abort code from an ABORT packet and stash it in skb->priority.
+ */
+static bool rxrpc_extract_abort(struct sk_buff *skb)
+{
+ __be32 wtmp;
+
+ if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header),
+ &wtmp, sizeof(wtmp)) < 0)
+ return false;
+ skb->priority = ntohl(wtmp);
+ return true;
+}
+
/*
* handle data received on the local endpoint
* - may be called in interrupt context
@@ -1264,8 +1269,10 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb)
case RXRPC_PACKET_TYPE_ACKALL:
if (sp->hdr.callNumber == 0)
goto bad_message;
- fallthrough;
+ break;
case RXRPC_PACKET_TYPE_ABORT:
+ if (!rxrpc_extract_abort(skb))
+ return true; /* Just discard if malformed */
break;

case RXRPC_PACKET_TYPE_DATA:


2022-11-23 10:34:10

by David Howells

[permalink] [raw]
Subject: [PATCH net-next 12/13] rxrpc: Trace rxrpc_bundle refcount

Add a tracepoint for the rxrpc_bundle refcounting.
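
As a hypothetical caller (the function and the particular get/put reason
pairing are invented for this note; the real call sites are converted in the
hunks below), the idea is that every refcount change on a bundle now names
its reason, and the tracepoint records that reason next to the post-change
refcount.

static void example_bundle_ref_cycle(struct rxrpc_bundle *bundle)
{
	/* Each operation passes the enum value the tracepoint prints. */
	bundle = rxrpc_get_bundle(bundle, rxrpc_bundle_get_client_call);

	/* ... use the bundle ... */

	rxrpc_put_bundle(bundle, rxrpc_bundle_put_conn);
}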

Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
---

include/trace/events/rxrpc.h | 34 ++++++++++++++++++++++++++++++++++
net/rxrpc/ar-internal.h | 4 ++--
net/rxrpc/conn_client.c | 27 ++++++++++++++++-----------
net/rxrpc/conn_object.c | 2 +-
net/rxrpc/conn_service.c | 3 ++-
5 files changed, 55 insertions(+), 15 deletions(-)

diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
index 34151d17e7b6..8af077c2927a 100644
--- a/include/trace/events/rxrpc.h
+++ b/include/trace/events/rxrpc.h
@@ -81,6 +81,15 @@
EM(rxrpc_peer_put_input_error, "PUT inpt-err") \
E_(rxrpc_peer_put_keepalive, "PUT keepaliv")

+#define rxrpc_bundle_traces \
+ EM(rxrpc_bundle_free, "FREE ") \
+ EM(rxrpc_bundle_get_client_call, "GET clt-call") \
+ EM(rxrpc_bundle_get_client_conn, "GET clt-conn") \
+ EM(rxrpc_bundle_get_service_conn, "GET svc-conn") \
+ EM(rxrpc_bundle_put_conn, "PUT conn ") \
+ EM(rxrpc_bundle_put_discard, "PUT discard ") \
+ E_(rxrpc_bundle_new, "NEW ")
+
#define rxrpc_conn_traces \
EM(rxrpc_conn_free, "FREE ") \
EM(rxrpc_conn_get_activate_call, "GET act-call") \
@@ -362,6 +371,7 @@
#define EM(a, b) a,
#define E_(a, b) a

+enum rxrpc_bundle_trace { rxrpc_bundle_traces } __mode(byte);
enum rxrpc_call_trace { rxrpc_call_traces } __mode(byte);
enum rxrpc_client_trace { rxrpc_client_traces } __mode(byte);
enum rxrpc_congest_change { rxrpc_congest_changes } __mode(byte);
@@ -391,6 +401,7 @@ enum rxrpc_txqueue_trace { rxrpc_txqueue_traces } __mode(byte);
#define EM(a, b) TRACE_DEFINE_ENUM(a);
#define E_(a, b) TRACE_DEFINE_ENUM(a);

+rxrpc_bundle_traces;
rxrpc_call_traces;
rxrpc_client_traces;
rxrpc_congest_changes;
@@ -468,6 +479,29 @@ TRACE_EVENT(rxrpc_peer,
__entry->ref)
);

+TRACE_EVENT(rxrpc_bundle,
+ TP_PROTO(unsigned int bundle_debug_id, int ref, enum rxrpc_bundle_trace why),
+
+ TP_ARGS(bundle_debug_id, ref, why),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, bundle )
+ __field(int, ref )
+ __field(int, why )
+ ),
+
+ TP_fast_assign(
+ __entry->bundle = bundle_debug_id;
+ __entry->ref = ref;
+ __entry->why = why;
+ ),
+
+ TP_printk("CB=%08x %s r=%d",
+ __entry->bundle,
+ __print_symbolic(__entry->why, rxrpc_bundle_traces),
+ __entry->ref)
+ );
+
TRACE_EVENT(rxrpc_conn,
TP_PROTO(unsigned int conn_debug_id, int ref, enum rxrpc_conn_trace why),

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index 82eb09b961a0..c588c0e81f63 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -875,8 +875,8 @@ extern unsigned long rxrpc_conn_idle_client_fast_expiry;
extern struct idr rxrpc_client_conn_ids;

void rxrpc_destroy_client_conn_ids(void);
-struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *);
-void rxrpc_put_bundle(struct rxrpc_bundle *);
+struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *, enum rxrpc_bundle_trace);
+void rxrpc_put_bundle(struct rxrpc_bundle *, enum rxrpc_bundle_trace);
int rxrpc_connect_call(struct rxrpc_sock *, struct rxrpc_call *,
struct rxrpc_conn_parameters *, struct sockaddr_rxrpc *,
gfp_t);
diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
index 4352e777aa2a..34ff6fa85c32 100644
--- a/net/rxrpc/conn_client.c
+++ b/net/rxrpc/conn_client.c
@@ -133,31 +133,36 @@ static struct rxrpc_bundle *rxrpc_alloc_bundle(struct rxrpc_conn_parameters *cp,
atomic_set(&bundle->active, 1);
spin_lock_init(&bundle->channel_lock);
INIT_LIST_HEAD(&bundle->waiting_calls);
+ trace_rxrpc_bundle(bundle->debug_id, 1, rxrpc_bundle_new);
}
return bundle;
}

-struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *bundle)
+struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *bundle,
+ enum rxrpc_bundle_trace why)
{
- refcount_inc(&bundle->ref);
+ int r;
+
+ __refcount_inc(&bundle->ref, &r);
+ trace_rxrpc_bundle(bundle->debug_id, r + 1, why);
return bundle;
}

static void rxrpc_free_bundle(struct rxrpc_bundle *bundle)
{
+ trace_rxrpc_bundle(bundle->debug_id, 1, rxrpc_bundle_free);
rxrpc_put_peer(bundle->peer, rxrpc_peer_put_bundle);
kfree(bundle);
}

-void rxrpc_put_bundle(struct rxrpc_bundle *bundle)
+void rxrpc_put_bundle(struct rxrpc_bundle *bundle, enum rxrpc_bundle_trace why)
{
- unsigned int d = bundle->debug_id;
+ unsigned int id = bundle->debug_id;
bool dead;
int r;

dead = __refcount_dec_and_test(&bundle->ref, &r);
-
- _debug("PUT B=%x %d", d, r - 1);
+ trace_rxrpc_bundle(id, r - 1, why);
if (dead)
rxrpc_free_bundle(bundle);
}
@@ -206,7 +211,7 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp)
list_add_tail(&conn->proc_link, &rxnet->conn_proc_list);
write_unlock(&rxnet->conn_lock);

- rxrpc_get_bundle(bundle);
+ rxrpc_get_bundle(bundle, rxrpc_bundle_get_client_conn);
rxrpc_get_peer(conn->peer, rxrpc_peer_get_client_conn);
rxrpc_get_local(conn->local, rxrpc_local_get_client_conn);
key_get(conn->key);
@@ -342,7 +347,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *c
candidate->debug_id = atomic_inc_return(&rxrpc_bundle_id);
rb_link_node(&candidate->local_node, parent, pp);
rb_insert_color(&candidate->local_node, &local->client_bundles);
- rxrpc_get_bundle(candidate);
+ rxrpc_get_bundle(candidate, rxrpc_bundle_get_client_call);
spin_unlock(&local->client_bundles_lock);
_leave(" = %u [new]", candidate->debug_id);
return candidate;
@@ -350,7 +355,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *c
found_bundle_free:
rxrpc_free_bundle(candidate);
found_bundle:
- rxrpc_get_bundle(bundle);
+ rxrpc_get_bundle(bundle, rxrpc_bundle_get_client_call);
atomic_inc(&bundle->active);
spin_unlock(&local->client_bundles_lock);
_leave(" = %u [found]", bundle->debug_id);
@@ -740,7 +745,7 @@ int rxrpc_connect_call(struct rxrpc_sock *rx,

out_put_bundle:
rxrpc_deactivate_bundle(bundle);
- rxrpc_put_bundle(bundle);
+ rxrpc_put_bundle(bundle, rxrpc_bundle_get_client_call);
out:
_leave(" = %d", ret);
return ret;
@@ -958,7 +963,7 @@ static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle)

spin_unlock(&local->client_bundles_lock);
if (need_put)
- rxrpc_put_bundle(bundle);
+ rxrpc_put_bundle(bundle, rxrpc_bundle_put_discard);
}
}

diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
index bbace8d9953d..f7c271a740ed 100644
--- a/net/rxrpc/conn_object.c
+++ b/net/rxrpc/conn_object.c
@@ -363,7 +363,7 @@ static void rxrpc_destroy_connection(struct rcu_head *rcu)

conn->security->clear(conn);
key_put(conn->key);
- rxrpc_put_bundle(conn->bundle);
+ rxrpc_put_bundle(conn->bundle, rxrpc_bundle_put_conn);
rxrpc_put_peer(conn->peer, rxrpc_peer_put_conn);

if (atomic_dec_and_test(&conn->local->rxnet->nr_conns))
diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c
index bf087213bd4d..2c44d67b43dc 100644
--- a/net/rxrpc/conn_service.c
+++ b/net/rxrpc/conn_service.c
@@ -133,7 +133,8 @@ struct rxrpc_connection *rxrpc_prealloc_service_connection(struct rxrpc_net *rxn
*/
conn->state = RXRPC_CONN_SERVICE_PREALLOC;
refcount_set(&conn->ref, 2);
- conn->bundle = rxrpc_get_bundle(&rxrpc_service_dummy_bundle);
+ conn->bundle = rxrpc_get_bundle(&rxrpc_service_dummy_bundle,
+ rxrpc_bundle_get_service_conn);

atomic_inc(&rxnet->nr_conns);
write_lock(&rxnet->conn_lock);


2022-11-23 10:34:33

by David Howells

[permalink] [raw]
Subject: [PATCH net-next 08/13] rxrpc: trace: Don't use __builtin_return_address for rxrpc_local tracing

In rxrpc tracing, use enums to generate lists of points of interest rather
than __builtin_return_address() for the rxrpc_local tracepoint.
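
The mechanism is the list-macro pattern this header already uses: one
EM()/E_() list is expanded twice, once into the enum that call sites pass in
and once into the string table that __print_symbolic() consumes, so nothing
needs to capture or resolve a return address.  A condensed sketch with
made-up names (the real lists are in the hunks below):

#define example_local_traces \
	EM(example_local_get_queue, "GET queue   ") \
	E_(example_local_put_queue, "PUT queue   ")

/* First expansion: the enum values passed to the tracepoint. */
#undef EM
#undef E_
#define EM(a, b) a,
#define E_(a, b) a
enum example_local_trace { example_local_traces };

/* Second expansion: the symbol table, used in TP_printk() as
 *	__print_symbolic(__entry->why, example_local_traces)
 */
#undef EM
#undef E_
#define EM(a, b) { a, b },
#define E_(a, b) { a, b }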

Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
---

include/trace/events/rxrpc.h | 49 ++++++++++++++++++++-------
net/rxrpc/af_rxrpc.c | 8 ++--
net/rxrpc/ar-internal.h | 41 +++++++++++++++++-----
net/rxrpc/call_accept.c | 4 +-
net/rxrpc/call_event.c | 2 +
net/rxrpc/conn_client.c | 2 +
net/rxrpc/conn_event.c | 4 +-
net/rxrpc/conn_object.c | 2 +
net/rxrpc/input.c | 4 +-
net/rxrpc/local_object.c | 77 +++++++++++++++++++++++-------------------
net/rxrpc/output.c | 3 +-
net/rxrpc/peer_event.c | 4 +-
net/rxrpc/peer_object.c | 4 +-
13 files changed, 128 insertions(+), 76 deletions(-)

diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
index 2b77f9a75bf7..015569845b1d 100644
--- a/include/trace/events/rxrpc.h
+++ b/include/trace/events/rxrpc.h
@@ -32,12 +32,35 @@
E_(rxrpc_skb_unshared_nomem, "US0")

#define rxrpc_local_traces \
- EM(rxrpc_local_got, "GOT") \
- EM(rxrpc_local_new, "NEW") \
- EM(rxrpc_local_processing, "PRO") \
- EM(rxrpc_local_put, "PUT") \
- EM(rxrpc_local_queued, "QUE") \
- E_(rxrpc_local_tx_ack, "TAK")
+ EM(rxrpc_local_free, "FREE ") \
+ EM(rxrpc_local_get_client_conn, "GET conn-cln") \
+ EM(rxrpc_local_get_for_use, "GET for-use ") \
+ EM(rxrpc_local_get_peer, "GET peer ") \
+ EM(rxrpc_local_get_prealloc_conn, "GET conn-pre") \
+ EM(rxrpc_local_get_queue, "GET queue ") \
+ EM(rxrpc_local_new, "NEW ") \
+ EM(rxrpc_local_processing, "PROCESSING ") \
+ EM(rxrpc_local_put_already_queued, "PUT alreadyq") \
+ EM(rxrpc_local_put_bind, "PUT bind ") \
+ EM(rxrpc_local_put_for_use, "PUT for-use ") \
+ EM(rxrpc_local_put_kill_conn, "PUT conn-kil") \
+ EM(rxrpc_local_put_peer, "PUT peer ") \
+ EM(rxrpc_local_put_prealloc_conn, "PUT conn-pre") \
+ EM(rxrpc_local_put_release_sock, "PUT rel-sock") \
+ EM(rxrpc_local_put_queue, "PUT queue ") \
+ EM(rxrpc_local_queued, "QUEUED ") \
+ EM(rxrpc_local_see_tx_ack, "SEE tx-ack ") \
+ EM(rxrpc_local_stop, "STOP ") \
+ EM(rxrpc_local_stopped, "STOPPED ") \
+ EM(rxrpc_local_unuse_bind, "UNU bind ") \
+ EM(rxrpc_local_unuse_conn_work, "UNU conn-wrk") \
+ EM(rxrpc_local_unuse_peer_keepalive, "UNU peer-kpa") \
+ EM(rxrpc_local_unuse_release_sock, "UNU rel-sock") \
+ EM(rxrpc_local_unuse_work, "UNU work ") \
+ EM(rxrpc_local_use_conn_work, "USE conn-wrk") \
+ EM(rxrpc_local_use_lookup, "USE lookup ") \
+ EM(rxrpc_local_use_peer_keepalive, "USE peer-kpa") \
+ E_(rxrpc_local_use_work, "USE work ")

#define rxrpc_peer_traces \
EM(rxrpc_peer_got, "GOT") \
@@ -345,29 +368,29 @@ rxrpc_txqueue_traces;

TRACE_EVENT(rxrpc_local,
TP_PROTO(unsigned int local_debug_id, enum rxrpc_local_trace op,
- int usage, const void *where),
+ int ref, int usage),

- TP_ARGS(local_debug_id, op, usage, where),
+ TP_ARGS(local_debug_id, op, ref, usage),

TP_STRUCT__entry(
__field(unsigned int, local )
__field(int, op )
+ __field(int, ref )
__field(int, usage )
- __field(const void *, where )
),

TP_fast_assign(
__entry->local = local_debug_id;
__entry->op = op;
+ __entry->ref = ref;
__entry->usage = usage;
- __entry->where = where;
),

- TP_printk("L=%08x %s u=%d sp=%pSR",
+ TP_printk("L=%08x %s r=%d u=%d",
__entry->local,
__print_symbolic(__entry->op, rxrpc_local_traces),
- __entry->usage,
- __entry->where)
+ __entry->ref,
+ __entry->usage)
);

TRACE_EVENT(rxrpc_peer,
diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
index aacdd96a9886..989ebca899f3 100644
--- a/net/rxrpc/af_rxrpc.c
+++ b/net/rxrpc/af_rxrpc.c
@@ -194,8 +194,8 @@ static int rxrpc_bind(struct socket *sock, struct sockaddr *saddr, int len)

service_in_use:
write_unlock(&local->services_lock);
- rxrpc_unuse_local(local);
- rxrpc_put_local(local);
+ rxrpc_unuse_local(local, rxrpc_local_unuse_bind);
+ rxrpc_put_local(local, rxrpc_local_put_bind);
ret = -EADDRINUSE;
error_unlock:
release_sock(&rx->sk);
@@ -888,8 +888,8 @@ static int rxrpc_release_sock(struct sock *sk)
flush_workqueue(rxrpc_workqueue);
rxrpc_purge_queue(&sk->sk_receive_queue);

- rxrpc_unuse_local(rx->local);
- rxrpc_put_local(rx->local);
+ rxrpc_unuse_local(rx->local, rxrpc_local_unuse_release_sock);
+ rxrpc_put_local(rx->local, rxrpc_local_put_release_sock);
rx->local = NULL;
key_put(rx->key);
rx->key = NULL;
diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index 7c48b0163032..dde9ce21ef48 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -979,22 +979,45 @@ extern void rxrpc_process_local_events(struct rxrpc_local *);
* local_object.c
*/
struct rxrpc_local *rxrpc_lookup_local(struct net *, const struct sockaddr_rxrpc *);
-struct rxrpc_local *rxrpc_get_local(struct rxrpc_local *);
-struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *);
-void rxrpc_put_local(struct rxrpc_local *);
-struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *);
-void rxrpc_unuse_local(struct rxrpc_local *);
+struct rxrpc_local *rxrpc_get_local(struct rxrpc_local *, enum rxrpc_local_trace);
+struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *, enum rxrpc_local_trace);
+void rxrpc_put_local(struct rxrpc_local *, enum rxrpc_local_trace);
+struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *, enum rxrpc_local_trace);
+void rxrpc_unuse_local(struct rxrpc_local *, enum rxrpc_local_trace);
void rxrpc_queue_local(struct rxrpc_local *);
void rxrpc_destroy_all_locals(struct rxrpc_net *);

-static inline bool __rxrpc_unuse_local(struct rxrpc_local *local)
+static inline bool __rxrpc_unuse_local(struct rxrpc_local *local,
+ enum rxrpc_local_trace why)
{
- return atomic_dec_return(&local->active_users) == 0;
+ unsigned int debug_id = local->debug_id;
+ int r, u;
+
+ r = refcount_read(&local->ref);
+ u = atomic_dec_return(&local->active_users);
+ trace_rxrpc_local(debug_id, why, r, u);
+ return u == 0;
+}
+
+static inline bool __rxrpc_use_local(struct rxrpc_local *local,
+ enum rxrpc_local_trace why)
+{
+ int r, u;
+
+ r = refcount_read(&local->ref);
+ u = atomic_fetch_add_unless(&local->active_users, 1, 0);
+ trace_rxrpc_local(local->debug_id, why, r, u);
+ return u != 0;
}

-static inline bool __rxrpc_use_local(struct rxrpc_local *local)
+static inline void rxrpc_see_local(struct rxrpc_local *local,
+ enum rxrpc_local_trace why)
{
- return atomic_fetch_add_unless(&local->active_users, 1, 0) != 0;
+ int r, u;
+
+ r = refcount_read(&local->ref);
+ u = atomic_read(&local->active_users);
+ trace_rxrpc_local(local->debug_id, why, r, u);
}

/*
diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
index 4888959e4727..1b12d4e28373 100644
--- a/net/rxrpc/call_accept.c
+++ b/net/rxrpc/call_accept.c
@@ -197,7 +197,7 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx)
tail = b->peer_backlog_tail;
while (CIRC_CNT(head, tail, size) > 0) {
struct rxrpc_peer *peer = b->peer_backlog[tail];
- rxrpc_put_local(peer->local);
+ rxrpc_put_local(peer->local, rxrpc_local_put_prealloc_conn);
kfree(peer);
tail = (tail + 1) & (size - 1);
}
@@ -305,7 +305,7 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx,
b->conn_backlog[conn_tail] = NULL;
smp_store_release(&b->conn_backlog_tail,
(conn_tail + 1) & (RXRPC_BACKLOG_MAX - 1));
- conn->local = rxrpc_get_local(local);
+ conn->local = rxrpc_get_local(local, rxrpc_local_get_prealloc_conn);
conn->peer = peer;
rxrpc_see_connection(conn);
rxrpc_new_incoming_connection(rx, conn, sec, skb);
diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
index 75e7c2903b2e..8dfcde127a1d 100644
--- a/net/rxrpc/call_event.c
+++ b/net/rxrpc/call_event.c
@@ -114,7 +114,7 @@ void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason,
if (in_task()) {
rxrpc_transmit_ack_packets(call->peer->local);
} else {
- rxrpc_get_local(local);
+ rxrpc_get_local(local, rxrpc_local_get_queue);
rxrpc_queue_local(local);
}
}
diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
index 71404b33623f..9a69b4c1b182 100644
--- a/net/rxrpc/conn_client.c
+++ b/net/rxrpc/conn_client.c
@@ -208,7 +208,7 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp)

rxrpc_get_bundle(bundle);
rxrpc_get_peer(conn->peer);
- rxrpc_get_local(conn->local);
+ rxrpc_get_local(conn->local, rxrpc_local_get_client_conn);
key_get(conn->key);

trace_rxrpc_conn(conn->debug_id, rxrpc_conn_new_client,
diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
index f890a30c4df6..225edaf019f1 100644
--- a/net/rxrpc/conn_event.c
+++ b/net/rxrpc/conn_event.c
@@ -474,9 +474,9 @@ void rxrpc_process_connection(struct work_struct *work)

rxrpc_see_connection(conn);

- if (__rxrpc_use_local(conn->local)) {
+ if (__rxrpc_use_local(conn->local, rxrpc_local_use_conn_work)) {
rxrpc_do_process_connection(conn);
- rxrpc_unuse_local(conn->local);
+ rxrpc_unuse_local(conn->local, rxrpc_local_unuse_conn_work);
}

rxrpc_put_connection(conn);
diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
index ad6e5ee1f069..725359afeac0 100644
--- a/net/rxrpc/conn_object.c
+++ b/net/rxrpc/conn_object.c
@@ -366,7 +366,7 @@ static void rxrpc_destroy_connection(struct rcu_head *rcu)

if (atomic_dec_and_test(&conn->local->rxnet->nr_conns))
wake_up_var(&conn->local->rxnet->nr_conns);
- rxrpc_put_local(conn->local);
+ rxrpc_put_local(conn->local, rxrpc_local_put_kill_conn);

kfree(conn);
_leave("");
diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
index 42c8257158f7..cecfd201d832 100644
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -1133,7 +1133,7 @@ static void rxrpc_post_packet_to_local(struct rxrpc_local *local,
{
_enter("%p,%p", local, skb);

- if (rxrpc_get_local_maybe(local)) {
+ if (rxrpc_get_local_maybe(local, rxrpc_local_get_queue)) {
skb_queue_tail(&local->event_queue, skb);
rxrpc_queue_local(local);
} else {
@@ -1146,7 +1146,7 @@ static void rxrpc_post_packet_to_local(struct rxrpc_local *local,
*/
static void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb)
{
- if (rxrpc_get_local_maybe(local)) {
+ if (rxrpc_get_local_maybe(local, rxrpc_local_get_queue)) {
skb_queue_tail(&local->reject_queue, skb);
rxrpc_queue_local(local);
} else {
diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
index 11080c335d42..1916acda7c27 100644
--- a/net/rxrpc/local_object.c
+++ b/net/rxrpc/local_object.c
@@ -110,7 +110,7 @@ static struct rxrpc_local *rxrpc_alloc_local(struct rxrpc_net *rxnet,
local->debug_id = atomic_inc_return(&rxrpc_debug_id);
memcpy(&local->srx, srx, sizeof(*srx));
local->srx.srx_service = 0;
- trace_rxrpc_local(local->debug_id, rxrpc_local_new, 1, NULL);
+ trace_rxrpc_local(local->debug_id, rxrpc_local_new, 1, 1);
}

_leave(" = %p", local);
@@ -228,7 +228,7 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net,
* we're attempting to use a local address that the dying
* object is still using.
*/
- if (!rxrpc_use_local(local))
+ if (!rxrpc_use_local(local, rxrpc_local_use_lookup))
break;

goto found;
@@ -272,32 +272,32 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net,
/*
* Get a ref on a local endpoint.
*/
-struct rxrpc_local *rxrpc_get_local(struct rxrpc_local *local)
+struct rxrpc_local *rxrpc_get_local(struct rxrpc_local *local,
+ enum rxrpc_local_trace why)
{
- const void *here = __builtin_return_address(0);
- int r;
+ int r, u;

+ u = atomic_read(&local->active_users);
__refcount_inc(&local->ref, &r);
- trace_rxrpc_local(local->debug_id, rxrpc_local_got, r + 1, here);
+ trace_rxrpc_local(local->debug_id, why, r + 1, u);
return local;
}

/*
* Get a ref on a local endpoint unless its usage has already reached 0.
*/
-struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *local)
+struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *local,
+ enum rxrpc_local_trace why)
{
- const void *here = __builtin_return_address(0);
- int r;
+ int r, u;

- if (local) {
- if (__refcount_inc_not_zero(&local->ref, &r))
- trace_rxrpc_local(local->debug_id, rxrpc_local_got,
- r + 1, here);
- else
- local = NULL;
+ if (local && __refcount_inc_not_zero(&local->ref, &r)) {
+ u = atomic_read(&local->active_users);
+ trace_rxrpc_local(local->debug_id, why, r + 1, u);
+ return local;
}
- return local;
+
+ return NULL;
}

/*
@@ -305,31 +305,31 @@ struct rxrpc_local *rxrpc_get_local_maybe(struct rxrpc_local *local)
*/
void rxrpc_queue_local(struct rxrpc_local *local)
{
- const void *here = __builtin_return_address(0);
unsigned int debug_id = local->debug_id;
int r = refcount_read(&local->ref);
+ int u = atomic_read(&local->active_users);

if (rxrpc_queue_work(&local->processor))
- trace_rxrpc_local(debug_id, rxrpc_local_queued, r + 1, here);
+ trace_rxrpc_local(debug_id, rxrpc_local_queued, r, u);
else
- rxrpc_put_local(local);
+ rxrpc_put_local(local, rxrpc_local_put_already_queued);
}

/*
* Drop a ref on a local endpoint.
*/
-void rxrpc_put_local(struct rxrpc_local *local)
+void rxrpc_put_local(struct rxrpc_local *local, enum rxrpc_local_trace why)
{
- const void *here = __builtin_return_address(0);
unsigned int debug_id;
bool dead;
- int r;
+ int r, u;

if (local) {
debug_id = local->debug_id;

+ u = atomic_read(&local->active_users);
dead = __refcount_dec_and_test(&local->ref, &r);
- trace_rxrpc_local(debug_id, rxrpc_local_put, r, here);
+ trace_rxrpc_local(debug_id, why, r, u);

if (dead)
call_rcu(&local->rcu, rxrpc_local_rcu);
@@ -339,14 +339,15 @@ void rxrpc_put_local(struct rxrpc_local *local)
/*
* Start using a local endpoint.
*/
-struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *local)
+struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *local,
+ enum rxrpc_local_trace why)
{
- local = rxrpc_get_local_maybe(local);
+ local = rxrpc_get_local_maybe(local, rxrpc_local_get_for_use);
if (!local)
return NULL;

- if (!__rxrpc_use_local(local)) {
- rxrpc_put_local(local);
+ if (!__rxrpc_use_local(local, why)) {
+ rxrpc_put_local(local, rxrpc_local_put_for_use);
return NULL;
}

@@ -357,11 +358,17 @@ struct rxrpc_local *rxrpc_use_local(struct rxrpc_local *local)
* Cease using a local endpoint. Once the number of active users reaches 0, we
* start the closure of the transport in the work processor.
*/
-void rxrpc_unuse_local(struct rxrpc_local *local)
+void rxrpc_unuse_local(struct rxrpc_local *local, enum rxrpc_local_trace why)
{
+ unsigned int debug_id = local->debug_id;
+ int r, u;
+
if (local) {
- if (__rxrpc_unuse_local(local)) {
- rxrpc_get_local(local);
+ r = refcount_read(&local->ref);
+ u = atomic_dec_return(&local->active_users);
+ trace_rxrpc_local(debug_id, why, r, u);
+ if (u == 0) {
+ rxrpc_get_local(local, rxrpc_local_get_queue);
rxrpc_queue_local(local);
}
}
@@ -418,12 +425,11 @@ static void rxrpc_local_processor(struct work_struct *work)
if (local->dead)
return;

- trace_rxrpc_local(local->debug_id, rxrpc_local_processing,
- refcount_read(&local->ref), NULL);
+ rxrpc_see_local(local, rxrpc_local_processing);

do {
again = false;
- if (!__rxrpc_use_local(local)) {
+ if (!__rxrpc_use_local(local, rxrpc_local_use_work)) {
rxrpc_local_destroyer(local);
break;
}
@@ -443,10 +449,10 @@ static void rxrpc_local_processor(struct work_struct *work)
again = true;
}

- __rxrpc_unuse_local(local);
+ __rxrpc_unuse_local(local, rxrpc_local_unuse_work);
} while (again);

- rxrpc_put_local(local);
+ rxrpc_put_local(local, rxrpc_local_put_queue);
}

/*
@@ -460,6 +466,7 @@ static void rxrpc_local_rcu(struct rcu_head *rcu)

ASSERT(!work_pending(&local->processor));

+ rxrpc_see_local(local, rxrpc_local_free);
kfree(local);
_leave("");
}
diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
index b5d8eac8c49c..2762b7ada9ae 100644
--- a/net/rxrpc/output.c
+++ b/net/rxrpc/output.c
@@ -288,8 +288,7 @@ void rxrpc_transmit_ack_packets(struct rxrpc_local *local)
LIST_HEAD(queue);
int ret;

- trace_rxrpc_local(local->debug_id, rxrpc_local_tx_ack,
- refcount_read(&local->ref), NULL);
+ rxrpc_see_local(local, rxrpc_local_see_tx_ack);

if (list_empty(&local->ack_tx_queue))
return;
diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
index ad4d1769e02b..3f8d104ecaa7 100644
--- a/net/rxrpc/peer_event.c
+++ b/net/rxrpc/peer_event.c
@@ -266,7 +266,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
if (!rxrpc_get_peer_maybe(peer))
continue;

- if (__rxrpc_use_local(peer->local)) {
+ if (__rxrpc_use_local(peer->local, rxrpc_local_use_peer_keepalive)) {
spin_unlock_bh(&rxnet->peer_hash_lock);

keepalive_at = peer->last_tx_at + RXRPC_KEEPALIVE_TIME;
@@ -289,7 +289,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
spin_lock_bh(&rxnet->peer_hash_lock);
list_add_tail(&peer->keepalive_link,
&rxnet->peer_keepalive[slot & mask]);
- rxrpc_unuse_local(peer->local);
+ rxrpc_unuse_local(peer->local, rxrpc_local_unuse_peer_keepalive);
}
rxrpc_put_peer_locked(peer);
}
diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
index b3c3c1c344fc..bcef897560e7 100644
--- a/net/rxrpc/peer_object.c
+++ b/net/rxrpc/peer_object.c
@@ -215,7 +215,7 @@ struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp)
peer = kzalloc(sizeof(struct rxrpc_peer), gfp);
if (peer) {
refcount_set(&peer->ref, 1);
- peer->local = rxrpc_get_local(local);
+ peer->local = rxrpc_get_local(local, rxrpc_local_get_peer);
INIT_HLIST_HEAD(&peer->error_targets);
peer->service_conns = RB_ROOT;
seqlock_init(&peer->service_conn_lock);
@@ -294,7 +294,7 @@ static struct rxrpc_peer *rxrpc_create_peer(struct rxrpc_sock *rx,

static void rxrpc_free_peer(struct rxrpc_peer *peer)
{
- rxrpc_put_local(peer->local);
+ rxrpc_put_local(peer->local, rxrpc_local_put_peer);
kfree_rcu(peer, rcu);
}



2022-11-23 10:36:25

by David Howells

[permalink] [raw]
Subject: [PATCH net-next 11/13] rxrpc: trace: Don't use __builtin_return_address for rxrpc_call tracing

In rxrpc tracing, use enums to generate lists of points of interest rather
than __builtin_return_address() for the rxrpc_call tracepoint.
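
To show the shape of the conversion before the hunks below (the pairing of
old and new enum values is only an example), a call site stops relying on
the recorded return address and instead states what it is doing, which the
tracepoint then prints symbolically:

	/* Before: one generic reason; the caller was identified by %pSR. */
	rxrpc_get_call(call, rxrpc_call_got);

	/* After: the caller names its purpose, e.g. taking a ref in sendmsg. */
	rxrpc_get_call(call, rxrpc_call_get_sendmsg);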

Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
---

include/trace/events/rxrpc.h | 84 +++++++++++++++++++++--------------
net/rxrpc/ar-internal.h | 8 ++-
net/rxrpc/call_accept.c | 16 +++----
net/rxrpc/call_event.c | 10 ++--
net/rxrpc/call_object.c | 102 ++++++++++++++++++------------------------
net/rxrpc/conn_client.c | 2 -
net/rxrpc/input.c | 8 ++-
net/rxrpc/output.c | 2 -
net/rxrpc/peer_event.c | 2 -
net/rxrpc/recvmsg.c | 8 ++-
net/rxrpc/sendmsg.c | 4 +-
11 files changed, 123 insertions(+), 123 deletions(-)

diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
index e09568a8c173..34151d17e7b6 100644
--- a/include/trace/events/rxrpc.h
+++ b/include/trace/events/rxrpc.h
@@ -127,26 +127,45 @@
E_(rxrpc_client_to_idle, "->Idle")

#define rxrpc_call_traces \
- EM(rxrpc_call_connected, "CON") \
- EM(rxrpc_call_error, "*E*") \
- EM(rxrpc_call_got, "GOT") \
- EM(rxrpc_call_got_kernel, "Gke") \
- EM(rxrpc_call_got_timer, "GTM") \
- EM(rxrpc_call_got_tx, "Gtx") \
- EM(rxrpc_call_got_userid, "Gus") \
- EM(rxrpc_call_new_client, "NWc") \
- EM(rxrpc_call_new_service, "NWs") \
- EM(rxrpc_call_put, "PUT") \
- EM(rxrpc_call_put_kernel, "Pke") \
- EM(rxrpc_call_put_noqueue, "PnQ") \
- EM(rxrpc_call_put_notimer, "PnT") \
- EM(rxrpc_call_put_timer, "PTM") \
- EM(rxrpc_call_put_tx, "Ptx") \
- EM(rxrpc_call_put_userid, "Pus") \
- EM(rxrpc_call_queued, "QUE") \
- EM(rxrpc_call_queued_ref, "QUR") \
- EM(rxrpc_call_release, "RLS") \
- E_(rxrpc_call_seen, "SEE")
+ EM(rxrpc_call_get_input, "GET input ") \
+ EM(rxrpc_call_get_kernel_service, "GET krnl-srv") \
+ EM(rxrpc_call_get_notify_socket, "GET notify ") \
+ EM(rxrpc_call_get_recvmsg, "GET recvmsg ") \
+ EM(rxrpc_call_get_release_sock, "GET rel-sock") \
+ EM(rxrpc_call_get_retrans, "GET retrans ") \
+ EM(rxrpc_call_get_sendmsg, "GET sendmsg ") \
+ EM(rxrpc_call_get_send_ack, "GET send-ack") \
+ EM(rxrpc_call_get_timer, "GET timer ") \
+ EM(rxrpc_call_get_userid, "GET user-id ") \
+ EM(rxrpc_call_new_client, "NEW client ") \
+ EM(rxrpc_call_new_prealloc_service, "NEW prealloc") \
+ EM(rxrpc_call_put_already_queued, "PUT alreadyq") \
+ EM(rxrpc_call_put_discard_prealloc, "PUT disc-pre") \
+ EM(rxrpc_call_put_input, "PUT input ") \
+ EM(rxrpc_call_put_kernel, "PUT kernel ") \
+ EM(rxrpc_call_put_recvmsg, "PUT recvmsg ") \
+ EM(rxrpc_call_put_release_sock, "PUT rls-sock") \
+ EM(rxrpc_call_put_release_sock_tba, "PUT rls-sk-a") \
+ EM(rxrpc_call_put_send_ack, "PUT send-ack") \
+ EM(rxrpc_call_put_sendmsg, "PUT sendmsg ") \
+ EM(rxrpc_call_put_timer, "PUT timer ") \
+ EM(rxrpc_call_put_timer_already, "PUT timer-al") \
+ EM(rxrpc_call_put_unnotify, "PUT unnotify") \
+ EM(rxrpc_call_put_userid_exists, "PUT u-exists") \
+ EM(rxrpc_call_put_work, "PUT work ") \
+ EM(rxrpc_call_queue_abort, "QUE abort ") \
+ EM(rxrpc_call_queue_requeue, "QUE requeue ") \
+ EM(rxrpc_call_queue_resend, "QUE resend ") \
+ EM(rxrpc_call_queue_timer, "QUE timer ") \
+ EM(rxrpc_call_see_accept, "SEE accept ") \
+ EM(rxrpc_call_see_activate_client, "SEE act-clnt") \
+ EM(rxrpc_call_see_connect_failed, "SEE con-fail") \
+ EM(rxrpc_call_see_connected, "SEE connect ") \
+ EM(rxrpc_call_see_distribute_error, "SEE dist-err") \
+ EM(rxrpc_call_see_input, "SEE input ") \
+ EM(rxrpc_call_see_release, "SEE release ") \
+ EM(rxrpc_call_see_userid_exists, "SEE u-exists") \
+ E_(rxrpc_call_see_zap, "SEE zap ")

#define rxrpc_txqueue_traces \
EM(rxrpc_txqueue_await_reply, "AWR") \
@@ -503,32 +522,29 @@ TRACE_EVENT(rxrpc_client,
);

TRACE_EVENT(rxrpc_call,
- TP_PROTO(unsigned int call_debug_id, enum rxrpc_call_trace op,
- int usage, const void *where, const void *aux),
+ TP_PROTO(unsigned int call_debug_id, int ref, unsigned long aux,
+ enum rxrpc_call_trace why),

- TP_ARGS(call_debug_id, op, usage, where, aux),
+ TP_ARGS(call_debug_id, ref, aux, why),

TP_STRUCT__entry(
__field(unsigned int, call )
- __field(int, op )
- __field(int, usage )
- __field(const void *, where )
- __field(const void *, aux )
+ __field(int, ref )
+ __field(int, why )
+ __field(unsigned long, aux )
),

TP_fast_assign(
__entry->call = call_debug_id;
- __entry->op = op;
- __entry->usage = usage;
- __entry->where = where;
+ __entry->ref = ref;
+ __entry->why = why;
__entry->aux = aux;
),

- TP_printk("c=%08x %s u=%d sp=%pSR a=%p",
+ TP_printk("c=%08x %s r=%d a=%lx",
__entry->call,
- __print_symbolic(__entry->op, rxrpc_call_traces),
- __entry->usage,
- __entry->where,
+ __print_symbolic(__entry->why, rxrpc_call_traces),
+ __entry->ref,
__entry->aux)
);

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index bc8281c410c5..82eb09b961a0 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -847,10 +847,10 @@ void rxrpc_incoming_call(struct rxrpc_sock *, struct rxrpc_call *,
struct sk_buff *);
void rxrpc_release_call(struct rxrpc_sock *, struct rxrpc_call *);
void rxrpc_release_calls_on_socket(struct rxrpc_sock *);
-bool __rxrpc_queue_call(struct rxrpc_call *);
-bool rxrpc_queue_call(struct rxrpc_call *);
-void rxrpc_see_call(struct rxrpc_call *);
-bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op);
+bool __rxrpc_queue_call(struct rxrpc_call *, enum rxrpc_call_trace);
+bool rxrpc_queue_call(struct rxrpc_call *, enum rxrpc_call_trace);
+void rxrpc_see_call(struct rxrpc_call *, enum rxrpc_call_trace);
+bool rxrpc_try_get_call(struct rxrpc_call *, enum rxrpc_call_trace);
void rxrpc_get_call(struct rxrpc_call *, enum rxrpc_call_trace);
void rxrpc_put_call(struct rxrpc_call *, enum rxrpc_call_trace);
void rxrpc_cleanup_call(struct rxrpc_call *);
diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
index 04b52e28e0cc..dd4ca4bee77f 100644
--- a/net/rxrpc/call_accept.c
+++ b/net/rxrpc/call_accept.c
@@ -38,7 +38,6 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
unsigned long user_call_ID, gfp_t gfp,
unsigned int debug_id)
{
- const void *here = __builtin_return_address(0);
struct rxrpc_call *call, *xcall;
struct rxrpc_net *rxnet = rxrpc_net(sock_net(&rx->sk));
struct rb_node *parent, **pp;
@@ -102,9 +101,8 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
call->flags |= (1 << RXRPC_CALL_IS_SERVICE);
call->state = RXRPC_CALL_SERVER_PREALLOC;

- trace_rxrpc_call(call->debug_id, rxrpc_call_new_service,
- refcount_read(&call->ref),
- here, (const void *)user_call_ID);
+ trace_rxrpc_call(call->debug_id, refcount_read(&call->ref),
+ user_call_ID, rxrpc_call_new_prealloc_service);

write_lock(&rx->call_lock);

@@ -125,11 +123,11 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
call->user_call_ID = user_call_ID;
call->notify_rx = notify_rx;
if (user_attach_call) {
- rxrpc_get_call(call, rxrpc_call_got_kernel);
+ rxrpc_get_call(call, rxrpc_call_get_kernel_service);
user_attach_call(call, user_call_ID);
}

- rxrpc_get_call(call, rxrpc_call_got_userid);
+ rxrpc_get_call(call, rxrpc_call_get_userid);
rb_link_node(&call->sock_node, parent, pp);
rb_insert_color(&call->sock_node, &rx->calls);
set_bit(RXRPC_CALL_HAS_USERID, &call->flags);
@@ -229,7 +227,7 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx)
}
rxrpc_call_completed(call);
rxrpc_release_call(rx, call);
- rxrpc_put_call(call, rxrpc_call_put);
+ rxrpc_put_call(call, rxrpc_call_put_discard_prealloc);
tail = (tail + 1) & (size - 1);
}

@@ -318,7 +316,7 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx,
smp_store_release(&b->call_backlog_tail,
(call_tail + 1) & (RXRPC_BACKLOG_MAX - 1));

- rxrpc_see_call(call);
+ rxrpc_see_call(call, rxrpc_call_see_accept);
call->conn = conn;
call->security = conn->security;
call->security_ix = conn->security_ix;
@@ -430,7 +428,7 @@ struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local,
* (recvmsg queue, to-be-accepted queue or user ID tree) or the kernel
* service to prevent the call from being deallocated too early.
*/
- rxrpc_put_call(call, rxrpc_call_put);
+ rxrpc_put_call(call, rxrpc_call_put_discard_prealloc);

_leave(" = %p{%d}", call, call->debug_id);
return call;
diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
index 8dfcde127a1d..7ea26fc358a0 100644
--- a/net/rxrpc/call_event.c
+++ b/net/rxrpc/call_event.c
@@ -101,7 +101,7 @@ void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason,
txb->ack.reason = ack_reason;
txb->ack.nAcks = 0;

- if (!rxrpc_try_get_call(call, rxrpc_call_got)) {
+ if (!rxrpc_try_get_call(call, rxrpc_call_get_send_ack)) {
rxrpc_put_txbuf(txb, rxrpc_txbuf_put_nomem);
return;
}
@@ -198,7 +198,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)

if (list_empty(&txb->tx_link)) {
rxrpc_get_txbuf(txb, rxrpc_txbuf_get_retrans);
- rxrpc_get_call(call, rxrpc_call_got_tx);
+ rxrpc_get_call(call, rxrpc_call_get_retrans);
list_add_tail(&txb->tx_link, &retrans_queue);
set_bit(RXRPC_TXBUF_RESENT, &txb->flags);
}
@@ -303,7 +303,7 @@ void rxrpc_process_call(struct work_struct *work)
unsigned int iterations = 0;
rxrpc_serial_t ackr_serial;

- rxrpc_see_call(call);
+ rxrpc_see_call(call, rxrpc_call_see_input);

//printk("\n--------------------\n");
_enter("{%d,%s,%lx}",
@@ -437,12 +437,12 @@ void rxrpc_process_call(struct work_struct *work)
goto requeue;

out_put:
- rxrpc_put_call(call, rxrpc_call_put);
+ rxrpc_put_call(call, rxrpc_call_put_work);
out:
_leave("");
return;

requeue:
- __rxrpc_queue_call(call);
+ __rxrpc_queue_call(call, rxrpc_call_queue_requeue);
goto out;
}
diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
index 29ec4013aa0b..afd957f6dc1c 100644
--- a/net/rxrpc/call_object.c
+++ b/net/rxrpc/call_object.c
@@ -53,9 +53,9 @@ static void rxrpc_call_timer_expired(struct timer_list *t)

if (call->state < RXRPC_CALL_COMPLETE) {
trace_rxrpc_timer_expired(call, jiffies);
- __rxrpc_queue_call(call);
+ __rxrpc_queue_call(call, rxrpc_call_queue_timer);
} else {
- rxrpc_put_call(call, rxrpc_call_put);
+ rxrpc_put_call(call, rxrpc_call_put_already_queued);
}
}

@@ -64,10 +64,10 @@ void rxrpc_reduce_call_timer(struct rxrpc_call *call,
unsigned long now,
enum rxrpc_timer_trace why)
{
- if (rxrpc_try_get_call(call, rxrpc_call_got_timer)) {
+ if (rxrpc_try_get_call(call, rxrpc_call_get_timer)) {
trace_rxrpc_timer(call, why, now);
if (timer_reduce(&call->timer, expire_at))
- rxrpc_put_call(call, rxrpc_call_put_notimer);
+ rxrpc_put_call(call, rxrpc_call_put_timer_already);
}
}

@@ -110,7 +110,7 @@ struct rxrpc_call *rxrpc_find_call_by_user_ID(struct rxrpc_sock *rx,
return NULL;

found_extant_call:
- rxrpc_get_call(call, rxrpc_call_got);
+ rxrpc_get_call(call, rxrpc_call_get_sendmsg);
read_unlock(&rx->call_lock);
_leave(" = %p [%d]", call, refcount_read(&call->ref));
return call;
@@ -270,7 +270,6 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
struct rxrpc_net *rxnet;
struct semaphore *limiter;
struct rb_node *parent, **pp;
- const void *here = __builtin_return_address(0);
int ret;

_enter("%p,%lx", rx, p->user_call_ID);
@@ -291,9 +290,8 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,

call->interruptibility = p->interruptibility;
call->tx_total_len = p->tx_total_len;
- trace_rxrpc_call(call->debug_id, rxrpc_call_new_client,
- refcount_read(&call->ref),
- here, (const void *)p->user_call_ID);
+ trace_rxrpc_call(call->debug_id, refcount_read(&call->ref),
+ p->user_call_ID, rxrpc_call_new_client);
if (p->kernel)
__set_bit(RXRPC_CALL_KERNEL, &call->flags);

@@ -322,7 +320,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
rcu_assign_pointer(call->socket, rx);
call->user_call_ID = p->user_call_ID;
__set_bit(RXRPC_CALL_HAS_USERID, &call->flags);
- rxrpc_get_call(call, rxrpc_call_got_userid);
+ rxrpc_get_call(call, rxrpc_call_get_userid);
rb_link_node(&call->sock_node, parent, pp);
rb_insert_color(&call->sock_node, &rx->calls);
list_add(&call->sock_link, &rx->sock_calls);
@@ -344,8 +342,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
if (ret < 0)
goto error_attached_to_socket;

- trace_rxrpc_call(call->debug_id, rxrpc_call_connected,
- refcount_read(&call->ref), here, NULL);
+ rxrpc_see_call(call, rxrpc_call_see_connected);

rxrpc_start_call_timer(call);

@@ -362,11 +359,11 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
release_sock(&rx->sk);
__rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR,
RX_CALL_DEAD, -EEXIST);
- trace_rxrpc_call(call->debug_id, rxrpc_call_error,
- refcount_read(&call->ref), here, ERR_PTR(-EEXIST));
+ trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), 0,
+ rxrpc_call_see_userid_exists);
rxrpc_release_call(rx, call);
mutex_unlock(&call->user_mutex);
- rxrpc_put_call(call, rxrpc_call_put);
+ rxrpc_put_call(call, rxrpc_call_put_userid_exists);
_leave(" = -EEXIST");
return ERR_PTR(-EEXIST);

@@ -376,8 +373,8 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
* leave the error to recvmsg() to deal with.
*/
error_attached_to_socket:
- trace_rxrpc_call(call->debug_id, rxrpc_call_error,
- refcount_read(&call->ref), here, ERR_PTR(ret));
+ trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), ret,
+ rxrpc_call_see_connect_failed);
set_bit(RXRPC_CALL_DISCONNECTED, &call->flags);
__rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR,
RX_CALL_DEAD, ret);
@@ -428,72 +425,65 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx,
/*
* Queue a call's work processor, getting a ref to pass to the work queue.
*/
-bool rxrpc_queue_call(struct rxrpc_call *call)
+bool rxrpc_queue_call(struct rxrpc_call *call, enum rxrpc_call_trace why)
{
- const void *here = __builtin_return_address(0);
int n;

if (!__refcount_inc_not_zero(&call->ref, &n))
return false;
if (rxrpc_queue_work(&call->processor))
- trace_rxrpc_call(call->debug_id, rxrpc_call_queued, n + 1,
- here, NULL);
+ trace_rxrpc_call(call->debug_id, n + 1, 0, why);
else
- rxrpc_put_call(call, rxrpc_call_put_noqueue);
+ rxrpc_put_call(call, rxrpc_call_put_already_queued);
return true;
}

/*
* Queue a call's work processor, passing the callers ref to the work queue.
*/
-bool __rxrpc_queue_call(struct rxrpc_call *call)
+bool __rxrpc_queue_call(struct rxrpc_call *call, enum rxrpc_call_trace why)
{
- const void *here = __builtin_return_address(0);
int n = refcount_read(&call->ref);
+
ASSERTCMP(n, >=, 1);
if (rxrpc_queue_work(&call->processor))
- trace_rxrpc_call(call->debug_id, rxrpc_call_queued_ref, n,
- here, NULL);
+ trace_rxrpc_call(call->debug_id, n, 0, why);
else
- rxrpc_put_call(call, rxrpc_call_put_noqueue);
+ rxrpc_put_call(call, rxrpc_call_put_already_queued);
return true;
}

/*
* Note the re-emergence of a call.
*/
-void rxrpc_see_call(struct rxrpc_call *call)
+void rxrpc_see_call(struct rxrpc_call *call, enum rxrpc_call_trace why)
{
- const void *here = __builtin_return_address(0);
if (call) {
- int n = refcount_read(&call->ref);
+ int r = refcount_read(&call->ref);

- trace_rxrpc_call(call->debug_id, rxrpc_call_seen, n,
- here, NULL);
+ trace_rxrpc_call(call->debug_id, r, 0, why);
}
}

-bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
+bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace why)
{
- const void *here = __builtin_return_address(0);
- int n;
+ int r;

- if (!__refcount_inc_not_zero(&call->ref, &n))
+ if (!__refcount_inc_not_zero(&call->ref, &r))
return false;
- trace_rxrpc_call(call->debug_id, op, n + 1, here, NULL);
+ trace_rxrpc_call(call->debug_id, r + 1, 0, why);
return true;
}

/*
* Note the addition of a ref on a call.
*/
-void rxrpc_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
+void rxrpc_get_call(struct rxrpc_call *call, enum rxrpc_call_trace why)
{
- const void *here = __builtin_return_address(0);
- int n;
+ int r;

- __refcount_inc(&call->ref, &n);
- trace_rxrpc_call(call->debug_id, op, n + 1, here, NULL);
+ __refcount_inc(&call->ref, &r);
+ trace_rxrpc_call(call->debug_id, r + 1, 0, why);
}

/*
@@ -510,15 +500,13 @@ static void rxrpc_cleanup_ring(struct rxrpc_call *call)
*/
void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
{
- const void *here = __builtin_return_address(0);
struct rxrpc_connection *conn = call->conn;
bool put = false;

_enter("{%d,%d}", call->debug_id, refcount_read(&call->ref));

- trace_rxrpc_call(call->debug_id, rxrpc_call_release,
- refcount_read(&call->ref),
- here, (const void *)call->flags);
+ trace_rxrpc_call(call->debug_id, refcount_read(&call->ref),
+ call->flags, rxrpc_call_see_release);

ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);

@@ -544,14 +532,14 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)

write_unlock_bh(&rx->recvmsg_lock);
if (put)
- rxrpc_put_call(call, rxrpc_call_put);
+ rxrpc_put_call(call, rxrpc_call_put_unnotify);

write_lock(&rx->call_lock);

if (test_and_clear_bit(RXRPC_CALL_HAS_USERID, &call->flags)) {
rb_erase(&call->sock_node, &rx->calls);
memset(&call->sock_node, 0xdd, sizeof(call->sock_node));
- rxrpc_put_call(call, rxrpc_call_put_userid);
+ rxrpc_put_call(call, rxrpc_call_put_userid_exists);
}

list_del(&call->sock_link);
@@ -580,17 +568,17 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *rx)
struct rxrpc_call, accept_link);
list_del(&call->accept_link);
rxrpc_abort_call("SKR", call, 0, RX_CALL_DEAD, -ECONNRESET);
- rxrpc_put_call(call, rxrpc_call_put);
+ rxrpc_put_call(call, rxrpc_call_put_release_sock_tba);
}

while (!list_empty(&rx->sock_calls)) {
call = list_entry(rx->sock_calls.next,
struct rxrpc_call, sock_link);
- rxrpc_get_call(call, rxrpc_call_got);
+ rxrpc_get_call(call, rxrpc_call_get_release_sock);
rxrpc_abort_call("SKT", call, 0, RX_CALL_DEAD, -ECONNRESET);
rxrpc_send_abort_packet(call);
rxrpc_release_call(rx, call);
- rxrpc_put_call(call, rxrpc_call_put);
+ rxrpc_put_call(call, rxrpc_call_put_release_sock);
}

_leave("");
@@ -599,20 +587,18 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *rx)
/*
* release a call
*/
-void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
+void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace why)
{
struct rxrpc_net *rxnet = call->rxnet;
- const void *here = __builtin_return_address(0);
unsigned int debug_id = call->debug_id;
bool dead;
- int n;
+ int r;

ASSERT(call != NULL);

- dead = __refcount_dec_and_test(&call->ref, &n);
- trace_rxrpc_call(debug_id, op, n, here, NULL);
+ dead = __refcount_dec_and_test(&call->ref, &r);
+ trace_rxrpc_call(debug_id, r - 1, 0, why);
if (dead) {
- _debug("call %d dead", call->debug_id);
ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);

if (!list_empty(&call->link)) {
@@ -701,7 +687,7 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet)
struct rxrpc_call, link);
_debug("Zapping call %p", call);

- rxrpc_see_call(call);
+ rxrpc_see_call(call, rxrpc_call_see_zap);
list_del_init(&call->link);

pr_err("Call %p still in use (%d,%s,%lx,%lx)!\n",
diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
index dcfec6a45255..4352e777aa2a 100644
--- a/net/rxrpc/conn_client.c
+++ b/net/rxrpc/conn_client.c
@@ -540,7 +540,7 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn,
clear_bit(RXRPC_CONN_FINAL_ACK_0 + channel, &conn->flags);
clear_bit(conn->bundle_shift + channel, &bundle->avail_chans);

- rxrpc_see_call(call);
+ rxrpc_see_call(call, rxrpc_call_see_activate_client);
list_del_init(&call->chan_wait_link);
call->peer = rxrpc_get_peer(conn->peer, rxrpc_peer_get_activate_call);
call->conn = rxrpc_get_connection(conn, rxrpc_conn_get_activate_call);
diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
index c8ff7489b412..09b44cd11c9b 100644
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -14,7 +14,7 @@ static void rxrpc_proto_abort(const char *why,
{
if (rxrpc_abort_call(why, call, seq, RX_PROTOCOL_ERROR, -EBADMSG)) {
set_bit(RXRPC_CALL_EV_ABORT, &call->events);
- rxrpc_queue_call(call);
+ rxrpc_queue_call(call, rxrpc_call_queue_abort);
}
}

@@ -175,7 +175,7 @@ static void rxrpc_congestion_management(struct rxrpc_call *call,
call->cong_cumul_acks = cumulative_acks;
trace_rxrpc_congest(call, summary, acked_serial, change);
if (resend && !test_and_set_bit(RXRPC_CALL_EV_RESEND, &call->events))
- rxrpc_queue_call(call);
+ rxrpc_queue_call(call, rxrpc_call_queue_resend);
return;

packet_loss_detected:
@@ -678,7 +678,7 @@ static void rxrpc_input_check_for_lost_ack(struct rxrpc_call *call)
{
if (after(call->acks_lost_top, call->acks_prev_seq) &&
!test_and_set_bit(RXRPC_CALL_EV_RESEND, &call->events))
- rxrpc_queue_call(call);
+ rxrpc_queue_call(call, rxrpc_call_queue_resend);
}

/*
@@ -1099,7 +1099,7 @@ static void rxrpc_input_implicit_end_call(struct rxrpc_sock *rx,
default:
if (rxrpc_abort_call("IMP", call, 0, RX_CALL_DEAD, -ESHUTDOWN)) {
set_bit(RXRPC_CALL_EV_ABORT, &call->events);
- rxrpc_queue_call(call);
+ rxrpc_queue_call(call, rxrpc_call_queue_abort);
}
trace_rxrpc_improper_term(call);
break;
diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
index 2762b7ada9ae..d324e88f7642 100644
--- a/net/rxrpc/output.c
+++ b/net/rxrpc/output.c
@@ -310,7 +310,7 @@ void rxrpc_transmit_ack_packets(struct rxrpc_local *local)
}

list_del_init(&txb->tx_link);
- rxrpc_put_call(txb->call, rxrpc_call_put);
+ rxrpc_put_call(txb->call, rxrpc_call_put_send_ack);
rxrpc_put_txbuf(txb, rxrpc_txbuf_put_ack_tx);
}
}
diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
index 5e97d321ac38..b28739d10927 100644
--- a/net/rxrpc/peer_event.c
+++ b/net/rxrpc/peer_event.c
@@ -238,7 +238,7 @@ static void rxrpc_distribute_error(struct rxrpc_peer *peer, int error,
struct rxrpc_call *call;

hlist_for_each_entry_rcu(call, &peer->error_targets, error_link) {
- rxrpc_see_call(call);
+ rxrpc_see_call(call, rxrpc_call_see_distribute_error);
rxrpc_set_call_completion(call, compl, 0, -error);
}
}
diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
index 134122f5961a..c84d2b620396 100644
--- a/net/rxrpc/recvmsg.c
+++ b/net/rxrpc/recvmsg.c
@@ -42,7 +42,7 @@ void rxrpc_notify_socket(struct rxrpc_call *call)
} else {
write_lock_bh(&rx->recvmsg_lock);
if (list_empty(&call->recvmsg_link)) {
- rxrpc_get_call(call, rxrpc_call_got);
+ rxrpc_get_call(call, rxrpc_call_get_notify_socket);
list_add_tail(&call->recvmsg_link, &rx->recvmsg_q);
}
write_unlock_bh(&rx->recvmsg_lock);
@@ -451,7 +451,7 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
if (!(flags & MSG_PEEK))
list_del_init(&call->recvmsg_link);
else
- rxrpc_get_call(call, rxrpc_call_got);
+ rxrpc_get_call(call, rxrpc_call_get_recvmsg);
write_unlock_bh(&rx->recvmsg_lock);

trace_rxrpc_recvmsg(call, rxrpc_recvmsg_dequeue, 0);
@@ -537,7 +537,7 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,

error_unlock_call:
mutex_unlock(&call->user_mutex);
- rxrpc_put_call(call, rxrpc_call_put);
+ rxrpc_put_call(call, rxrpc_call_put_recvmsg);
trace_rxrpc_recvmsg(call, rxrpc_recvmsg_return, ret);
return ret;

@@ -548,7 +548,7 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
write_unlock_bh(&rx->recvmsg_lock);
trace_rxrpc_recvmsg(call, rxrpc_recvmsg_requeue, 0);
} else {
- rxrpc_put_call(call, rxrpc_call_put);
+ rxrpc_put_call(call, rxrpc_call_put_recvmsg);
}
error_no_call:
release_sock(&rx->sk);
diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
index cfe0badba0b3..76b1e2e89c1e 100644
--- a/net/rxrpc/sendmsg.c
+++ b/net/rxrpc/sendmsg.c
@@ -667,7 +667,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
case RXRPC_CALL_CLIENT_AWAIT_CONN:
case RXRPC_CALL_SERVER_PREALLOC:
case RXRPC_CALL_SERVER_SECURING:
- rxrpc_put_call(call, rxrpc_call_put);
+ rxrpc_put_call(call, rxrpc_call_put_sendmsg);
ret = -EBUSY;
goto error_release_sock;
default:
@@ -737,7 +737,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
if (!dropped_lock)
mutex_unlock(&call->user_mutex);
error_put:
- rxrpc_put_call(call, rxrpc_call_put);
+ rxrpc_put_call(call, rxrpc_call_put_sendmsg);
_leave(" = %d", ret);
return ret;



2022-11-23 10:49:51

by David Howells

[permalink] [raw]
Subject: [PATCH net-next 10/13] rxrpc: trace: Don't use __builtin_return_address for rxrpc_conn tracing

In rxrpc tracing, use enums to generate lists of points of interest rather
than __builtin_return_address() for the rxrpc_conn tracepoint.
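
For illustration of the mechanism (a standalone userspace sketch, not the
kernel code; the demo_* names are invented), the EM()/E_() lists such as
rxrpc_conn_traces are expanded twice, once to declare the enum of points of
interest and once to build the matching string table, so adding a new "why"
value is a one-line change:

	#include <stdio.h>

	#define demo_conn_traces \
		EM(demo_conn_get_idle,		"GET idle    ") \
		EM(demo_conn_new_client,	"NEW client  ") \
		E_(demo_conn_put_work,		"PUT work    ")

	/* Expansion 1: the enum of points of interest. */
	#define EM(a, b) a,
	#define E_(a, b) a
	enum demo_conn_trace { demo_conn_traces };
	#undef EM
	#undef E_

	/* Expansion 2: the symbolic-name table for those values. */
	#define EM(a, b) [a] = b,
	#define E_(a, b) [a] = b
	static const char *demo_conn_names[] = { demo_conn_traces };
	#undef EM
	#undef E_

	int main(void)
	{
		printf("why=%s\n", demo_conn_names[demo_conn_new_client]);
		return 0;
	}

The trace header feeds the same name/string pairs to __print_symbolic() in
TP_printk(), as can be seen in the tracepoint change below.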

Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
---

include/trace/events/rxrpc.h | 58 +++++++++++++++++++++++++++---------------
net/rxrpc/ar-internal.h | 21 +++++++++------
net/rxrpc/call_accept.c | 9 ++-----
net/rxrpc/call_object.c | 2 +
net/rxrpc/conn_client.c | 28 ++++++++++----------
net/rxrpc/conn_event.c | 4 +--
net/rxrpc/conn_object.c | 40 +++++++++++++++--------------
net/rxrpc/conn_service.c | 4 +--
net/rxrpc/input.c | 2 +
9 files changed, 92 insertions(+), 76 deletions(-)

diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
index 1c74143a51c1..e09568a8c173 100644
--- a/include/trace/events/rxrpc.h
+++ b/include/trace/events/rxrpc.h
@@ -82,14 +82,34 @@
E_(rxrpc_peer_put_keepalive, "PUT keepaliv")

#define rxrpc_conn_traces \
- EM(rxrpc_conn_got, "GOT") \
- EM(rxrpc_conn_new_client, "NWc") \
- EM(rxrpc_conn_new_service, "NWs") \
- EM(rxrpc_conn_put_client, "PTc") \
- EM(rxrpc_conn_put_service, "PTs") \
- EM(rxrpc_conn_queued, "QUE") \
- EM(rxrpc_conn_reap_service, "RPs") \
- E_(rxrpc_conn_seen, "SEE")
+ EM(rxrpc_conn_free, "FREE ") \
+ EM(rxrpc_conn_get_activate_call, "GET act-call") \
+ EM(rxrpc_conn_get_call_input, "GET inp-call") \
+ EM(rxrpc_conn_get_conn_input, "GET inp-conn") \
+ EM(rxrpc_conn_get_idle, "GET idle ") \
+ EM(rxrpc_conn_get_poke, "GET poke ") \
+ EM(rxrpc_conn_get_service_conn, "GET svc-conn") \
+ EM(rxrpc_conn_new_client, "NEW client ") \
+ EM(rxrpc_conn_new_service, "NEW service ") \
+ EM(rxrpc_conn_put_already_queued, "PUT alreadyq") \
+ EM(rxrpc_conn_put_call, "PUT call ") \
+ EM(rxrpc_conn_put_call_input, "PUT inp-call") \
+ EM(rxrpc_conn_put_conn_input, "PUT inp-conn") \
+ EM(rxrpc_conn_put_discard, "PUT discard ") \
+ EM(rxrpc_conn_put_discard_idle, "PUT disc-idl") \
+ EM(rxrpc_conn_put_local_dead, "PUT loc-dead") \
+ EM(rxrpc_conn_put_noreuse, "PUT noreuse ") \
+ EM(rxrpc_conn_put_poke, "PUT poke ") \
+ EM(rxrpc_conn_put_unbundle, "PUT unbundle") \
+ EM(rxrpc_conn_put_unidle, "PUT unidle ") \
+ EM(rxrpc_conn_put_work, "PUT work ") \
+ EM(rxrpc_conn_queue_challenge, "GQ chall ") \
+ EM(rxrpc_conn_queue_retry_work, "GQ retry-wk") \
+ EM(rxrpc_conn_queue_rx_work, "GQ rx-work ") \
+ EM(rxrpc_conn_queue_timer, "GQ timer ") \
+ EM(rxrpc_conn_see_new_service_conn, "SEE new-svc ") \
+ EM(rxrpc_conn_see_reap_service, "SEE reap-svc") \
+ E_(rxrpc_conn_see_work, "SEE work ")

#define rxrpc_client_traces \
EM(rxrpc_client_activate_chans, "Activa") \
@@ -430,30 +450,26 @@ TRACE_EVENT(rxrpc_peer,
);

TRACE_EVENT(rxrpc_conn,
- TP_PROTO(unsigned int conn_debug_id, enum rxrpc_conn_trace op,
- int usage, const void *where),
+ TP_PROTO(unsigned int conn_debug_id, int ref, enum rxrpc_conn_trace why),

- TP_ARGS(conn_debug_id, op, usage, where),
+ TP_ARGS(conn_debug_id, ref, why),

TP_STRUCT__entry(
__field(unsigned int, conn )
- __field(int, op )
- __field(int, usage )
- __field(const void *, where )
+ __field(int, ref )
+ __field(int, why )
),

TP_fast_assign(
__entry->conn = conn_debug_id;
- __entry->op = op;
- __entry->usage = usage;
- __entry->where = where;
+ __entry->ref = ref;
+ __entry->why = why;
),

- TP_printk("C=%08x %s u=%d sp=%pSR",
+ TP_printk("C=%08x %s r=%d",
__entry->conn,
- __print_symbolic(__entry->op, rxrpc_conn_traces),
- __entry->usage,
- __entry->where)
+ __print_symbolic(__entry->why, rxrpc_conn_traces),
+ __entry->ref)
);

TRACE_EVENT(rxrpc_client,
diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index 6cb111e9761c..bc8281c410c5 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -882,7 +882,7 @@ int rxrpc_connect_call(struct rxrpc_sock *, struct rxrpc_call *,
gfp_t);
void rxrpc_expose_client_call(struct rxrpc_call *);
void rxrpc_disconnect_client_call(struct rxrpc_bundle *, struct rxrpc_call *);
-void rxrpc_put_client_conn(struct rxrpc_connection *);
+void rxrpc_put_client_conn(struct rxrpc_connection *, enum rxrpc_conn_trace);
void rxrpc_discard_expired_client_conns(struct work_struct *);
void rxrpc_destroy_all_client_connections(struct rxrpc_net *);
void rxrpc_clean_up_local_conns(struct rxrpc_local *);
@@ -906,11 +906,13 @@ struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *,
void __rxrpc_disconnect_call(struct rxrpc_connection *, struct rxrpc_call *);
void rxrpc_disconnect_call(struct rxrpc_call *);
void rxrpc_kill_connection(struct rxrpc_connection *);
-bool rxrpc_queue_conn(struct rxrpc_connection *);
-void rxrpc_see_connection(struct rxrpc_connection *);
-struct rxrpc_connection *rxrpc_get_connection(struct rxrpc_connection *);
-struct rxrpc_connection *rxrpc_get_connection_maybe(struct rxrpc_connection *);
-void rxrpc_put_service_conn(struct rxrpc_connection *);
+bool rxrpc_queue_conn(struct rxrpc_connection *, enum rxrpc_conn_trace);
+void rxrpc_see_connection(struct rxrpc_connection *, enum rxrpc_conn_trace);
+struct rxrpc_connection *rxrpc_get_connection(struct rxrpc_connection *,
+ enum rxrpc_conn_trace);
+struct rxrpc_connection *rxrpc_get_connection_maybe(struct rxrpc_connection *,
+ enum rxrpc_conn_trace);
+void rxrpc_put_service_conn(struct rxrpc_connection *, enum rxrpc_conn_trace);
void rxrpc_service_connection_reaper(struct work_struct *);
void rxrpc_destroy_all_connections(struct rxrpc_net *);

@@ -924,15 +926,16 @@ static inline bool rxrpc_conn_is_service(const struct rxrpc_connection *conn)
return !rxrpc_conn_is_client(conn);
}

-static inline void rxrpc_put_connection(struct rxrpc_connection *conn)
+static inline void rxrpc_put_connection(struct rxrpc_connection *conn,
+ enum rxrpc_conn_trace why)
{
if (!conn)
return;

if (rxrpc_conn_is_client(conn))
- rxrpc_put_client_conn(conn);
+ rxrpc_put_client_conn(conn, why);
else
- rxrpc_put_service_conn(conn);
+ rxrpc_put_service_conn(conn, why);
}

static inline void rxrpc_reduce_conn_timer(struct rxrpc_connection *conn,
diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
index f6bc3b07c3e5..04b52e28e0cc 100644
--- a/net/rxrpc/call_accept.c
+++ b/net/rxrpc/call_accept.c
@@ -91,9 +91,6 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
b->conn_backlog[head] = conn;
smp_store_release(&b->conn_backlog_head,
(head + 1) & (size - 1));
-
- trace_rxrpc_conn(conn->debug_id, rxrpc_conn_new_service,
- refcount_read(&conn->ref), here);
}

/* Now it gets complicated, because calls get registered with the
@@ -309,10 +306,10 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx,
(conn_tail + 1) & (RXRPC_BACKLOG_MAX - 1));
conn->local = rxrpc_get_local(local, rxrpc_local_get_prealloc_conn);
conn->peer = peer;
- rxrpc_see_connection(conn);
+ rxrpc_see_connection(conn, rxrpc_conn_see_new_service_conn);
rxrpc_new_incoming_connection(rx, conn, sec, skb);
} else {
- rxrpc_get_connection(conn);
+ rxrpc_get_connection(conn, rxrpc_conn_get_service_conn);
}

/* And now we can allocate and set up a new call */
@@ -402,7 +399,7 @@ struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local,
case RXRPC_CONN_SERVICE_UNSECURED:
conn->state = RXRPC_CONN_SERVICE_CHALLENGING;
set_bit(RXRPC_CONN_EV_CHALLENGE, &call->conn->events);
- rxrpc_queue_conn(call->conn);
+ rxrpc_queue_conn(call->conn, rxrpc_conn_queue_challenge);
break;

case RXRPC_CONN_SERVICE:
diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
index 1b725afd6e2c..29ec4013aa0b 100644
--- a/net/rxrpc/call_object.c
+++ b/net/rxrpc/call_object.c
@@ -635,7 +635,7 @@ static void rxrpc_destroy_call(struct work_struct *work)

rxrpc_delete_call_timer(call);

- rxrpc_put_connection(call->conn);
+ rxrpc_put_connection(call->conn, rxrpc_conn_put_call);
rxrpc_put_peer(call->peer, rxrpc_peer_put_call);
kmem_cache_free(rxrpc_call_jar, call);
if (atomic_dec_and_test(&rxnet->nr_calls))
diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
index 9444da235a48..dcfec6a45255 100644
--- a/net/rxrpc/conn_client.c
+++ b/net/rxrpc/conn_client.c
@@ -211,9 +211,8 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp)
rxrpc_get_local(conn->local, rxrpc_local_get_client_conn);
key_get(conn->key);

- trace_rxrpc_conn(conn->debug_id, rxrpc_conn_new_client,
- refcount_read(&conn->ref),
- __builtin_return_address(0));
+ trace_rxrpc_conn(conn->debug_id, refcount_read(&conn->ref),
+ rxrpc_conn_new_client);

atomic_inc(&rxnet->nr_client_conns);
trace_rxrpc_client(conn, -1, rxrpc_client_alloc);
@@ -467,10 +466,10 @@ static void rxrpc_add_conn_to_bundle(struct rxrpc_bundle *bundle, gfp_t gfp)
if (candidate) {
_debug("discard C=%x", candidate->debug_id);
trace_rxrpc_client(candidate, -1, rxrpc_client_duplicate);
- rxrpc_put_connection(candidate);
+ rxrpc_put_connection(candidate, rxrpc_conn_put_discard);
}

- rxrpc_put_connection(old);
+ rxrpc_put_connection(old, rxrpc_conn_put_noreuse);
_leave("");
}

@@ -544,7 +543,7 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn,
rxrpc_see_call(call);
list_del_init(&call->chan_wait_link);
call->peer = rxrpc_get_peer(conn->peer, rxrpc_peer_get_activate_call);
- call->conn = rxrpc_get_connection(conn);
+ call->conn = rxrpc_get_connection(conn, rxrpc_conn_get_activate_call);
call->cid = conn->proto.cid | channel;
call->call_id = call_id;
call->security = conn->security;
@@ -592,7 +591,7 @@ static void rxrpc_unidle_conn(struct rxrpc_bundle *bundle, struct rxrpc_connecti
}
spin_unlock(&rxnet->client_conn_cache_lock);
if (drop_ref)
- rxrpc_put_connection(conn);
+ rxrpc_put_connection(conn, rxrpc_conn_put_unidle);
}
}

@@ -896,7 +895,7 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call
trace_rxrpc_client(conn, channel, rxrpc_client_to_idle);
conn->idle_timestamp = jiffies;

- rxrpc_get_connection(conn);
+ rxrpc_get_connection(conn, rxrpc_conn_get_idle);
spin_lock(&rxnet->client_conn_cache_lock);
list_move_tail(&conn->cache_link, &rxnet->idle_client_conns);
spin_unlock(&rxnet->client_conn_cache_lock);
@@ -938,7 +937,7 @@ static void rxrpc_unbundle_conn(struct rxrpc_connection *conn)

if (need_drop) {
rxrpc_deactivate_bundle(bundle);
- rxrpc_put_connection(conn);
+ rxrpc_put_connection(conn, rxrpc_conn_put_unbundle);
}
}

@@ -983,15 +982,15 @@ static void rxrpc_kill_client_conn(struct rxrpc_connection *conn)
/*
* Clean up a dead client connections.
*/
-void rxrpc_put_client_conn(struct rxrpc_connection *conn)
+void rxrpc_put_client_conn(struct rxrpc_connection *conn,
+ enum rxrpc_conn_trace why)
{
- const void *here = __builtin_return_address(0);
unsigned int debug_id = conn->debug_id;
bool dead;
int r;

dead = __refcount_dec_and_test(&conn->ref, &r);
- trace_rxrpc_conn(debug_id, rxrpc_conn_put_client, r - 1, here);
+ trace_rxrpc_conn(debug_id, r - 1, why);
if (dead)
rxrpc_kill_client_conn(conn);
}
@@ -1063,7 +1062,8 @@ void rxrpc_discard_expired_client_conns(struct work_struct *work)
spin_unlock(&rxnet->client_conn_cache_lock);

rxrpc_unbundle_conn(conn);
- rxrpc_put_connection(conn); /* Drop the ->cache_link ref */
+ /* Drop the ->cache_link ref */
+ rxrpc_put_connection(conn, rxrpc_conn_put_discard_idle);

nr_conns--;
goto next;
@@ -1134,7 +1134,7 @@ void rxrpc_clean_up_local_conns(struct rxrpc_local *local)
struct rxrpc_connection, cache_link);
list_del_init(&conn->cache_link);
rxrpc_unbundle_conn(conn);
- rxrpc_put_connection(conn);
+ rxrpc_put_connection(conn, rxrpc_conn_put_local_dead);
}

_leave(" [culled]");
diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
index 225edaf019f1..817f895c77ca 100644
--- a/net/rxrpc/conn_event.c
+++ b/net/rxrpc/conn_event.c
@@ -472,14 +472,14 @@ void rxrpc_process_connection(struct work_struct *work)
struct rxrpc_connection *conn =
container_of(work, struct rxrpc_connection, processor);

- rxrpc_see_connection(conn);
+ rxrpc_see_connection(conn, rxrpc_conn_see_work);

if (__rxrpc_use_local(conn->local, rxrpc_local_use_conn_work)) {
rxrpc_do_process_connection(conn);
rxrpc_unuse_local(conn->local, rxrpc_local_unuse_conn_work);
}

- rxrpc_put_connection(conn);
+ rxrpc_put_connection(conn, rxrpc_conn_put_work);
_leave("");
return;
}
diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
index 554ee5dd3325..bbace8d9953d 100644
--- a/net/rxrpc/conn_object.c
+++ b/net/rxrpc/conn_object.c
@@ -26,7 +26,7 @@ static void rxrpc_connection_timer(struct timer_list *timer)
struct rxrpc_connection *conn =
container_of(timer, struct rxrpc_connection, timer);

- rxrpc_queue_conn(conn);
+ rxrpc_queue_conn(conn, rxrpc_conn_queue_timer);
}

/*
@@ -260,43 +260,42 @@ void rxrpc_kill_connection(struct rxrpc_connection *conn)
* Queue a connection's work processor, getting a ref to pass to the work
* queue.
*/
-bool rxrpc_queue_conn(struct rxrpc_connection *conn)
+bool rxrpc_queue_conn(struct rxrpc_connection *conn, enum rxrpc_conn_trace why)
{
- const void *here = __builtin_return_address(0);
int r;

if (!__refcount_inc_not_zero(&conn->ref, &r))
return false;
if (rxrpc_queue_work(&conn->processor))
- trace_rxrpc_conn(conn->debug_id, rxrpc_conn_queued, r + 1, here);
+ trace_rxrpc_conn(conn->debug_id, r + 1, why);
else
- rxrpc_put_connection(conn);
+ rxrpc_put_connection(conn, rxrpc_conn_put_already_queued);
return true;
}

/*
* Note the re-emergence of a connection.
*/
-void rxrpc_see_connection(struct rxrpc_connection *conn)
+void rxrpc_see_connection(struct rxrpc_connection *conn,
+ enum rxrpc_conn_trace why)
{
- const void *here = __builtin_return_address(0);
if (conn) {
- int n = refcount_read(&conn->ref);
+ int r = refcount_read(&conn->ref);

- trace_rxrpc_conn(conn->debug_id, rxrpc_conn_seen, n, here);
+ trace_rxrpc_conn(conn->debug_id, r, why);
}
}

/*
* Get a ref on a connection.
*/
-struct rxrpc_connection *rxrpc_get_connection(struct rxrpc_connection *conn)
+struct rxrpc_connection *rxrpc_get_connection(struct rxrpc_connection *conn,
+ enum rxrpc_conn_trace why)
{
- const void *here = __builtin_return_address(0);
int r;

__refcount_inc(&conn->ref, &r);
- trace_rxrpc_conn(conn->debug_id, rxrpc_conn_got, r, here);
+ trace_rxrpc_conn(conn->debug_id, r + 1, why);
return conn;
}

@@ -304,14 +303,14 @@ struct rxrpc_connection *rxrpc_get_connection(struct rxrpc_connection *conn)
* Try to get a ref on a connection.
*/
struct rxrpc_connection *
-rxrpc_get_connection_maybe(struct rxrpc_connection *conn)
+rxrpc_get_connection_maybe(struct rxrpc_connection *conn,
+ enum rxrpc_conn_trace why)
{
- const void *here = __builtin_return_address(0);
int r;

if (conn) {
if (__refcount_inc_not_zero(&conn->ref, &r))
- trace_rxrpc_conn(conn->debug_id, rxrpc_conn_got, r + 1, here);
+ trace_rxrpc_conn(conn->debug_id, r + 1, why);
else
conn = NULL;
}
@@ -331,14 +330,14 @@ static void rxrpc_set_service_reap_timer(struct rxrpc_net *rxnet,
/*
* Release a service connection
*/
-void rxrpc_put_service_conn(struct rxrpc_connection *conn)
+void rxrpc_put_service_conn(struct rxrpc_connection *conn,
+ enum rxrpc_conn_trace why)
{
- const void *here = __builtin_return_address(0);
unsigned int debug_id = conn->debug_id;
int r;

__refcount_dec(&conn->ref, &r);
- trace_rxrpc_conn(debug_id, rxrpc_conn_put_service, r - 1, here);
+ trace_rxrpc_conn(debug_id, r - 1, why);
if (r - 1 == 1)
rxrpc_set_service_reap_timer(conn->local->rxnet,
jiffies + rxrpc_connection_expiry);
@@ -354,6 +353,9 @@ static void rxrpc_destroy_connection(struct rcu_head *rcu)

_enter("{%d,u=%d}", conn->debug_id, refcount_read(&conn->ref));

+ trace_rxrpc_conn(conn->debug_id, refcount_read(&conn->ref),
+ rxrpc_conn_free);
+
ASSERTCMP(refcount_read(&conn->ref), ==, 0);

del_timer_sync(&conn->timer);
@@ -419,7 +421,7 @@ void rxrpc_service_connection_reaper(struct work_struct *work)
*/
if (!refcount_dec_if_one(&conn->ref))
continue;
- trace_rxrpc_conn(conn->debug_id, rxrpc_conn_reap_service, 0, NULL);
+ rxrpc_see_connection(conn, rxrpc_conn_see_reap_service);

if (rxrpc_conn_is_client(conn))
BUG();
diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c
index a3b91864ef21..bf087213bd4d 100644
--- a/net/rxrpc/conn_service.c
+++ b/net/rxrpc/conn_service.c
@@ -141,9 +141,7 @@ struct rxrpc_connection *rxrpc_prealloc_service_connection(struct rxrpc_net *rxn
list_add_tail(&conn->proc_link, &rxnet->conn_proc_list);
write_unlock(&rxnet->conn_lock);

- trace_rxrpc_conn(conn->debug_id, rxrpc_conn_new_service,
- refcount_read(&conn->ref),
- __builtin_return_address(0));
+ rxrpc_see_connection(conn, rxrpc_conn_new_service);
}

return conn;
diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
index cecfd201d832..c8ff7489b412 100644
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -1121,7 +1121,7 @@ static void rxrpc_post_packet_to_conn(struct rxrpc_connection *conn,
_enter("%p,%p", conn, skb);

skb_queue_tail(&conn->rx_queue, skb);
- rxrpc_queue_conn(conn);
+ rxrpc_queue_conn(conn, rxrpc_conn_queue_rx_work);
}

/*


2022-11-23 10:52:52

by David Howells

[permalink] [raw]
Subject: [PATCH net-next 02/13] rxrpc: Remove decl for rxrpc_kernel_call_is_complete()

rxrpc_kernel_call_is_complete() has been removed, so remove its declaration
too.

Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
---

include/net/af_rxrpc.h | 1 -
1 file changed, 1 deletion(-)

diff --git a/include/net/af_rxrpc.h b/include/net/af_rxrpc.h
index dc033f08191e..d5a5ae926380 100644
--- a/include/net/af_rxrpc.h
+++ b/include/net/af_rxrpc.h
@@ -66,7 +66,6 @@ int rxrpc_kernel_charge_accept(struct socket *, rxrpc_notify_rx_t,
void rxrpc_kernel_set_tx_length(struct socket *, struct rxrpc_call *, s64);
bool rxrpc_kernel_check_life(const struct socket *, const struct rxrpc_call *);
u32 rxrpc_kernel_get_epoch(struct socket *, struct rxrpc_call *);
-bool rxrpc_kernel_call_is_complete(struct rxrpc_call *);
void rxrpc_kernel_set_max_life(struct socket *, struct rxrpc_call *,
unsigned long);



2022-11-23 10:57:06

by David Howells

[permalink] [raw]
Subject: [PATCH net-next 05/13] rxrpc: Remove the [_k]net() debugging macros

Remove the _net() and knet() debugging macros in favour of tracepoints.
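
As an aside on what stays behind (a minimal standalone sketch, not the actual
macros; MY_DEBUG and the formatting are invented), the remaining _enter(),
_leave() and _debug() helpers keep the usual compile-out trick: when debugging
is off the call generates no output, but the format string and arguments are
still checked, which is what the no_printk() branch further down preserves:

	#include <stdio.h>

	#define MY_DEBUG 0	/* stand-in for __KDEBUG / CONFIG_AF_RXRPC_DEBUG */

	#if MY_DEBUG
	#define _debug(FMT, ...) printf("    " FMT "\n", ##__VA_ARGS__)
	#else
	/* Compiles away, but the arguments still get format-checked. */
	#define _debug(FMT, ...) \
		do { if (0) printf("    " FMT "\n", ##__VA_ARGS__); } while (0)
	#endif

	int main(void)
	{
		_debug("call %d dead", 42);	/* silent unless MY_DEBUG is set */
		return 0;
	}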

Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
---

net/rxrpc/ar-internal.h | 10 ----------
net/rxrpc/call_object.c | 6 ------
net/rxrpc/conn_client.c | 2 --
net/rxrpc/conn_object.c | 2 --
net/rxrpc/conn_service.c | 2 --
net/rxrpc/input.c | 1 -
net/rxrpc/local_object.c | 8 --------
net/rxrpc/peer_event.c | 48 ++--------------------------------------------
net/rxrpc/peer_object.c | 6 +-----
9 files changed, 3 insertions(+), 82 deletions(-)

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index a3a29390e12b..36ece1efb1d4 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -1190,20 +1190,17 @@ extern unsigned int rxrpc_debug;
#define kenter(FMT,...) dbgprintk("==> %s("FMT")",__func__ ,##__VA_ARGS__)
#define kleave(FMT,...) dbgprintk("<== %s()"FMT"",__func__ ,##__VA_ARGS__)
#define kdebug(FMT,...) dbgprintk(" "FMT ,##__VA_ARGS__)
-#define knet(FMT,...) dbgprintk("@@@ "FMT ,##__VA_ARGS__)


#if defined(__KDEBUG)
#define _enter(FMT,...) kenter(FMT,##__VA_ARGS__)
#define _leave(FMT,...) kleave(FMT,##__VA_ARGS__)
#define _debug(FMT,...) kdebug(FMT,##__VA_ARGS__)
-#define _net(FMT,...) knet(FMT,##__VA_ARGS__)

#elif defined(CONFIG_AF_RXRPC_DEBUG)
#define RXRPC_DEBUG_KENTER 0x01
#define RXRPC_DEBUG_KLEAVE 0x02
#define RXRPC_DEBUG_KDEBUG 0x04
-#define RXRPC_DEBUG_KNET 0x10

#define _enter(FMT,...) \
do { \
@@ -1223,17 +1220,10 @@ do { \
kdebug(FMT,##__VA_ARGS__); \
} while (0)

-#define _net(FMT,...) \
-do { \
- if (unlikely(rxrpc_debug & RXRPC_DEBUG_KNET)) \
- knet(FMT,##__VA_ARGS__); \
-} while (0)
-
#else
#define _enter(FMT,...) no_printk("==> %s("FMT")",__func__ ,##__VA_ARGS__)
#define _leave(FMT,...) no_printk("<== %s()"FMT"",__func__ ,##__VA_ARGS__)
#define _debug(FMT,...) no_printk(" "FMT ,##__VA_ARGS__)
-#define _net(FMT,...) no_printk("@@@ "FMT ,##__VA_ARGS__)
#endif

/*
diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
index 1befe22cd301..e36a317b2e9a 100644
--- a/net/rxrpc/call_object.c
+++ b/net/rxrpc/call_object.c
@@ -349,8 +349,6 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,

rxrpc_start_call_timer(call);

- _net("CALL new %d on CONN %d", call->debug_id, call->conn->debug_id);
-
_leave(" = %p [new]", call);
return call;

@@ -423,8 +421,6 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx,
hlist_add_head_rcu(&call->error_link, &conn->params.peer->error_targets);
spin_unlock(&conn->params.peer->lock);

- _net("CALL incoming %d on CONN %d", call->debug_id, call->conn->debug_id);
-
rxrpc_start_call_timer(call);
_leave("");
}
@@ -669,8 +665,6 @@ void rxrpc_cleanup_call(struct rxrpc_call *call)
{
struct rxrpc_txbuf *txb;

- _net("DESTROY CALL %d", call->debug_id);
-
memset(&call->sock_node, 0xcd, sizeof(call->sock_node));

ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);
diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
index f11c97e28d2a..2b76fbffd4dd 100644
--- a/net/rxrpc/conn_client.c
+++ b/net/rxrpc/conn_client.c
@@ -541,8 +541,6 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn,
call->service_id = conn->service_id;

trace_rxrpc_connect_call(call);
- _net("CONNECT call %08x:%08x as call %d on conn %d",
- call->cid, call->call_id, call->debug_id, conn->debug_id);

write_lock_bh(&call->state_lock);
call->state = RXRPC_CALL_CLIENT_SEND_REQUEST;
diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
index 156bd26daf74..d5d15389406f 100644
--- a/net/rxrpc/conn_object.c
+++ b/net/rxrpc/conn_object.c
@@ -356,8 +356,6 @@ static void rxrpc_destroy_connection(struct rcu_head *rcu)

ASSERTCMP(refcount_read(&conn->ref), ==, 0);

- _net("DESTROY CONN %d", conn->debug_id);
-
del_timer_sync(&conn->timer);
rxrpc_purge_queue(&conn->rx_queue);

diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c
index 6e6aa02c6f9e..75f903099eb0 100644
--- a/net/rxrpc/conn_service.c
+++ b/net/rxrpc/conn_service.c
@@ -184,8 +184,6 @@ void rxrpc_new_incoming_connection(struct rxrpc_sock *rx,

/* Make the connection a target for incoming packets. */
rxrpc_publish_service_conn(conn->params.peer, conn);
-
- _net("CONNECTION new %d {%x}", conn->debug_id, conn->proto.cid);
}

/*
diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
index 646ee61af40e..e2461f29d765 100644
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -725,7 +725,6 @@ static void rxrpc_input_ackinfo(struct rxrpc_call *call, struct sk_buff *skb,
peer->maxdata = mtu;
peer->mtu = mtu + peer->hdrsize;
spin_unlock_bh(&peer->lock);
- _net("Net MTU %u (maxdata %u)", peer->mtu, peer->maxdata);
}

if (wake)
diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
index a943fdf91e24..11080c335d42 100644
--- a/net/rxrpc/local_object.c
+++ b/net/rxrpc/local_object.c
@@ -198,7 +198,6 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net,
struct rxrpc_local *local;
struct rxrpc_net *rxnet = rxrpc_net(net);
struct hlist_node *cursor;
- const char *age;
long diff;
int ret;

@@ -232,7 +231,6 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net,
if (!rxrpc_use_local(local))
break;

- age = "old";
goto found;
}

@@ -250,14 +248,9 @@ struct rxrpc_local *rxrpc_lookup_local(struct net *net,
} else {
hlist_add_head_rcu(&local->link, &rxnet->local_endpoints);
}
- age = "new";

found:
mutex_unlock(&rxnet->local_mutex);
-
- _net("LOCAL %s %d {%pISp}",
- age, local->debug_id, &local->srx.transport);
-
_leave(" = %p", local);
return local;

@@ -467,7 +460,6 @@ static void rxrpc_local_rcu(struct rcu_head *rcu)

ASSERT(!work_pending(&local->processor));

- _net("DESTROY LOCAL %d", local->debug_id);
kfree(local);
_leave("");
}
diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
index be781c156e89..ad4d1769e02b 100644
--- a/net/rxrpc/peer_event.c
+++ b/net/rxrpc/peer_event.c
@@ -48,13 +48,11 @@ static struct rxrpc_peer *rxrpc_lookup_peer_local_rcu(struct rxrpc_local *local,
srx->transport.sin.sin_port = serr->port;
switch (serr->ee.ee_origin) {
case SO_EE_ORIGIN_ICMP:
- _net("Rx ICMP");
memcpy(&srx->transport.sin.sin_addr,
skb_network_header(skb) + serr->addr_offset,
sizeof(struct in_addr));
break;
case SO_EE_ORIGIN_ICMP6:
- _net("Rx ICMP6 on v4 sock");
memcpy(&srx->transport.sin.sin_addr,
skb_network_header(skb) + serr->addr_offset + 12,
sizeof(struct in_addr));
@@ -70,14 +68,12 @@ static struct rxrpc_peer *rxrpc_lookup_peer_local_rcu(struct rxrpc_local *local,
case AF_INET6:
switch (serr->ee.ee_origin) {
case SO_EE_ORIGIN_ICMP6:
- _net("Rx ICMP6");
srx->transport.sin6.sin6_port = serr->port;
memcpy(&srx->transport.sin6.sin6_addr,
skb_network_header(skb) + serr->addr_offset,
sizeof(struct in6_addr));
break;
case SO_EE_ORIGIN_ICMP:
- _net("Rx ICMP on v6 sock");
srx->transport_len = sizeof(srx->transport.sin);
srx->transport.family = AF_INET;
srx->transport.sin.sin_port = serr->port;
@@ -106,13 +102,9 @@ static struct rxrpc_peer *rxrpc_lookup_peer_local_rcu(struct rxrpc_local *local,
*/
static void rxrpc_adjust_mtu(struct rxrpc_peer *peer, unsigned int mtu)
{
- _net("Rx ICMP Fragmentation Needed (%d)", mtu);
-
/* wind down the local interface MTU */
- if (mtu > 0 && peer->if_mtu == 65535 && mtu < peer->if_mtu) {
+ if (mtu > 0 && peer->if_mtu == 65535 && mtu < peer->if_mtu)
peer->if_mtu = mtu;
- _net("I/F MTU %u", mtu);
- }

if (mtu == 0) {
/* they didn't give us a size, estimate one */
@@ -133,8 +125,6 @@ static void rxrpc_adjust_mtu(struct rxrpc_peer *peer, unsigned int mtu)
peer->mtu = mtu;
peer->maxdata = peer->mtu - peer->hdrsize;
spin_unlock_bh(&peer->lock);
- _net("Net MTU %u (maxdata %u)",
- peer->mtu, peer->maxdata);
}
}

@@ -222,41 +212,6 @@ static void rxrpc_store_error(struct rxrpc_peer *peer,
err = ee->ee_errno;

switch (ee->ee_origin) {
- case SO_EE_ORIGIN_ICMP:
- switch (ee->ee_type) {
- case ICMP_DEST_UNREACH:
- switch (ee->ee_code) {
- case ICMP_NET_UNREACH:
- _net("Rx Received ICMP Network Unreachable");
- break;
- case ICMP_HOST_UNREACH:
- _net("Rx Received ICMP Host Unreachable");
- break;
- case ICMP_PORT_UNREACH:
- _net("Rx Received ICMP Port Unreachable");
- break;
- case ICMP_NET_UNKNOWN:
- _net("Rx Received ICMP Unknown Network");
- break;
- case ICMP_HOST_UNKNOWN:
- _net("Rx Received ICMP Unknown Host");
- break;
- default:
- _net("Rx Received ICMP DestUnreach code=%u",
- ee->ee_code);
- break;
- }
- break;
-
- case ICMP_TIME_EXCEEDED:
- _net("Rx Received ICMP TTL Exceeded");
- break;
-
- default:
- break;
- }
- break;
-
case SO_EE_ORIGIN_NONE:
case SO_EE_ORIGIN_LOCAL:
compl = RXRPC_CALL_LOCAL_ERROR;
@@ -266,6 +221,7 @@ static void rxrpc_store_error(struct rxrpc_peer *peer,
if (err == EACCES)
err = EHOSTUNREACH;
fallthrough;
+ case SO_EE_ORIGIN_ICMP:
default:
break;
}
diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
index 041a51225c5f..b3c3c1c344fc 100644
--- a/net/rxrpc/peer_object.c
+++ b/net/rxrpc/peer_object.c
@@ -138,10 +138,8 @@ struct rxrpc_peer *rxrpc_lookup_peer_rcu(struct rxrpc_local *local,
unsigned long hash_key = rxrpc_peer_hash_key(local, srx);

peer = __rxrpc_lookup_peer_rcu(local, srx, hash_key);
- if (peer) {
- _net("PEER %d {%pISp}", peer->debug_id, &peer->srx.transport);
+ if (peer)
_leave(" = %p {u=%d}", peer, refcount_read(&peer->ref));
- }
return peer;
}

@@ -371,8 +369,6 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *rx,
peer = candidate;
}

- _net("PEER %d {%pISp}", peer->debug_id, &peer->srx.transport);
-
_leave(" = %p {u=%d}", peer, refcount_read(&peer->ref));
return peer;
}


2022-11-23 11:09:45

by David Howells

[permalink] [raw]
Subject: [PATCH net-next 03/13] rxrpc: Remove handling of duplicate packets in recvmsg_queue

We should no longer see duplicate packets in the recvmsg_queue. At one
point, jumbo packets that overlapped with already queued data would be
added to the queue and dealt with in recvmsg rather than in the softirq
input code, but now jumbo packets are split/cloned before being processed
by the input code and the subpackets can be discarded individually.

So remove the recvmsg-side code for handling this.
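
To sketch the input-side behaviour this relies on (a simplified standalone
model, not the rxrpc code; JUMBO_DATALEN, seen[] and the function names are
invented), each jumbo payload is carved into fixed-size subpackets with
consecutive sequence numbers and a duplicate subpacket is dropped at that
point, so nothing overlapping ever reaches the recvmsg queue:

	#include <stdbool.h>
	#include <stdio.h>

	#define JUMBO_DATALEN 1412	/* stand-in for RXRPC_JUMBO_DATALEN */

	static bool seen[16];		/* stand-in for the receive window state */

	static void queue_for_recvmsg(unsigned int seq, unsigned int len)
	{
		printf("queue seq=%u len=%u\n", seq, len);
	}

	static void input_jumbo(unsigned int first_seq, unsigned int len)
	{
		unsigned int seq = first_seq;

		while (len) {
			unsigned int sublen = len < JUMBO_DATALEN ? len : JUMBO_DATALEN;

			if (!seen[seq]) {	/* duplicates are dropped here... */
				seen[seq] = true;
				queue_for_recvmsg(seq, sublen); /* ...so recvmsg never sees them */
			}
			len -= sublen;
			seq++;
		}
	}

	int main(void)
	{
		input_jumbo(1, 3000);		/* subpackets 1, 2 and 3 are queued */
		input_jumbo(2, JUMBO_DATALEN);	/* retransmission of 2 is discarded */
		return 0;
	}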

Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
---

net/rxrpc/recvmsg.c | 18 ------------------
1 file changed, 18 deletions(-)

diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
index efb85f983657..134122f5961a 100644
--- a/net/rxrpc/recvmsg.c
+++ b/net/rxrpc/recvmsg.c
@@ -228,7 +228,6 @@ static void rxrpc_rotate_rx_window(struct rxrpc_call *call)

_enter("%d", call->debug_id);

-further_rotation:
skb = skb_dequeue(&call->recvmsg_queue);
rxrpc_see_skb(skb, rxrpc_skb_rotated);

@@ -250,17 +249,6 @@ static void rxrpc_rotate_rx_window(struct rxrpc_call *call)
return;
}

- /* The next packet on the queue might entirely overlap with the one we
- * just consumed; if so, rotate that away also.
- */
- skb = skb_peek(&call->recvmsg_queue);
- if (skb) {
- sp = rxrpc_skb(skb);
- if (sp->hdr.seq != call->rx_consumed &&
- after_eq(call->rx_consumed, sp->hdr.seq))
- goto further_rotation;
- }
-
/* Check to see if there's an ACK that needs sending. */
acked = atomic_add_return(call->rx_consumed - old_consumed,
&call->ackr_nr_consumed);
@@ -318,11 +306,6 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call,
sp = rxrpc_skb(skb);
seq = sp->hdr.seq;

- if (after_eq(call->rx_consumed, seq)) {
- kdebug("obsolete %x %x", call->rx_consumed, seq);
- goto skip_obsolete;
- }
-
if (!(flags & MSG_PEEK))
trace_rxrpc_receive(call, rxrpc_receive_front,
sp->hdr.serial, seq);
@@ -373,7 +356,6 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call,
break;
}

- skip_obsolete:
/* The whole packet has been transferred. */
if (sp->hdr.flags & RXRPC_LAST_PACKET)
ret = 1;


2022-11-23 11:10:11

by David Howells

[permalink] [raw]
Subject: [PATCH net-next 04/13] rxrpc: Remove the [k_]proto() debugging macros

Remove the kproto() and _proto() debugging macros in favour of using
tracepoints for this.

Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
---

include/trace/events/rxrpc.h | 60 ++++++++++++++++++++++++++++++++++++++++++
net/rxrpc/ar-internal.h | 10 -------
net/rxrpc/conn_event.c | 4 ---
net/rxrpc/input.c | 17 ------------
net/rxrpc/local_event.c | 3 --
net/rxrpc/output.c | 2 -
net/rxrpc/peer_event.c | 4 ---
net/rxrpc/rxkad.c | 9 ++----
8 files changed, 63 insertions(+), 46 deletions(-)

diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
index b9886d1df825..2b77f9a75bf7 100644
--- a/include/trace/events/rxrpc.h
+++ b/include/trace/events/rxrpc.h
@@ -733,6 +733,66 @@ TRACE_EVENT(rxrpc_rx_abort,
__entry->abort_code)
);

+TRACE_EVENT(rxrpc_rx_challenge,
+ TP_PROTO(struct rxrpc_connection *conn, rxrpc_serial_t serial,
+ u32 version, u32 nonce, u32 min_level),
+
+ TP_ARGS(conn, serial, version, nonce, min_level),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, conn )
+ __field(rxrpc_serial_t, serial )
+ __field(u32, version )
+ __field(u32, nonce )
+ __field(u32, min_level )
+ ),
+
+ TP_fast_assign(
+ __entry->conn = conn->debug_id;
+ __entry->serial = serial;
+ __entry->version = version;
+ __entry->nonce = nonce;
+ __entry->min_level = min_level;
+ ),
+
+ TP_printk("C=%08x CHALLENGE %08x v=%x n=%x ml=%x",
+ __entry->conn,
+ __entry->serial,
+ __entry->version,
+ __entry->nonce,
+ __entry->min_level)
+ );
+
+TRACE_EVENT(rxrpc_rx_response,
+ TP_PROTO(struct rxrpc_connection *conn, rxrpc_serial_t serial,
+ u32 version, u32 kvno, u32 ticket_len),
+
+ TP_ARGS(conn, serial, version, kvno, ticket_len),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, conn )
+ __field(rxrpc_serial_t, serial )
+ __field(u32, version )
+ __field(u32, kvno )
+ __field(u32, ticket_len )
+ ),
+
+ TP_fast_assign(
+ __entry->conn = conn->debug_id;
+ __entry->serial = serial;
+ __entry->version = version;
+ __entry->kvno = kvno;
+ __entry->ticket_len = ticket_len;
+ ),
+
+ TP_printk("C=%08x RESPONSE %08x v=%x kvno=%x tl=%x",
+ __entry->conn,
+ __entry->serial,
+ __entry->version,
+ __entry->kvno,
+ __entry->ticket_len)
+ );
+
TRACE_EVENT(rxrpc_rx_rwind_change,
TP_PROTO(struct rxrpc_call *call, rxrpc_serial_t serial,
u32 rwind, bool wake),
diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index f5c538ce3e23..a3a29390e12b 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -1190,7 +1190,6 @@ extern unsigned int rxrpc_debug;
#define kenter(FMT,...) dbgprintk("==> %s("FMT")",__func__ ,##__VA_ARGS__)
#define kleave(FMT,...) dbgprintk("<== %s()"FMT"",__func__ ,##__VA_ARGS__)
#define kdebug(FMT,...) dbgprintk(" "FMT ,##__VA_ARGS__)
-#define kproto(FMT,...) dbgprintk("### "FMT ,##__VA_ARGS__)
#define knet(FMT,...) dbgprintk("@@@ "FMT ,##__VA_ARGS__)


@@ -1198,14 +1197,12 @@ extern unsigned int rxrpc_debug;
#define _enter(FMT,...) kenter(FMT,##__VA_ARGS__)
#define _leave(FMT,...) kleave(FMT,##__VA_ARGS__)
#define _debug(FMT,...) kdebug(FMT,##__VA_ARGS__)
-#define _proto(FMT,...) kproto(FMT,##__VA_ARGS__)
#define _net(FMT,...) knet(FMT,##__VA_ARGS__)

#elif defined(CONFIG_AF_RXRPC_DEBUG)
#define RXRPC_DEBUG_KENTER 0x01
#define RXRPC_DEBUG_KLEAVE 0x02
#define RXRPC_DEBUG_KDEBUG 0x04
-#define RXRPC_DEBUG_KPROTO 0x08
#define RXRPC_DEBUG_KNET 0x10

#define _enter(FMT,...) \
@@ -1226,12 +1223,6 @@ do { \
kdebug(FMT,##__VA_ARGS__); \
} while (0)

-#define _proto(FMT,...) \
-do { \
- if (unlikely(rxrpc_debug & RXRPC_DEBUG_KPROTO)) \
- kproto(FMT,##__VA_ARGS__); \
-} while (0)
-
#define _net(FMT,...) \
do { \
if (unlikely(rxrpc_debug & RXRPC_DEBUG_KNET)) \
@@ -1242,7 +1233,6 @@ do { \
#define _enter(FMT,...) no_printk("==> %s("FMT")",__func__ ,##__VA_ARGS__)
#define _leave(FMT,...) no_printk("<== %s()"FMT"",__func__ ,##__VA_ARGS__)
#define _debug(FMT,...) no_printk(" "FMT ,##__VA_ARGS__)
-#define _proto(FMT,...) no_printk("### "FMT ,##__VA_ARGS__)
#define _net(FMT,...) no_printk("@@@ "FMT ,##__VA_ARGS__)
#endif

diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
index aab069701398..d5549cbfc71b 100644
--- a/net/rxrpc/conn_event.c
+++ b/net/rxrpc/conn_event.c
@@ -122,14 +122,12 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,

switch (chan->last_type) {
case RXRPC_PACKET_TYPE_ABORT:
- _proto("Tx ABORT %%%u { %d } [re]", serial, conn->abort_code);
break;
case RXRPC_PACKET_TYPE_ACK:
trace_rxrpc_tx_ack(chan->call_debug_id, serial,
ntohl(pkt.ack.firstPacket),
ntohl(pkt.ack.serial),
pkt.ack.reason, 0);
- _proto("Tx ACK %%%u [re]", serial);
break;
}

@@ -242,7 +240,6 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
serial = atomic_inc_return(&conn->serial);
rxrpc_abort_calls(conn, RXRPC_CALL_LOCALLY_ABORTED, serial);
whdr.serial = htonl(serial);
- _proto("Tx CONN ABORT %%%u { %d }", serial, conn->abort_code);

ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len);
if (ret < 0) {
@@ -315,7 +312,6 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
return -EPROTO;
}
abort_code = ntohl(wtmp);
- _proto("Rx ABORT %%%u { ac=%d }", sp->hdr.serial, abort_code);

conn->error = -ECONNABORTED;
conn->abort_code = abort_code;
diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
index bdf70b81addc..646ee61af40e 100644
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -551,9 +551,6 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
atomic64_read(&call->ackr_window), call->rx_highest_seq,
skb->len, seq0);

- _proto("Rx DATA %%%u { #%u f=%02x }",
- sp->hdr.serial, seq0, sp->hdr.flags);
-
state = READ_ONCE(call->state);
if (state >= RXRPC_CALL_COMPLETE) {
rxrpc_free_skb(skb, rxrpc_skb_freed);
@@ -708,11 +705,6 @@ static void rxrpc_input_ackinfo(struct rxrpc_call *call, struct sk_buff *skb,
bool wake = false;
u32 rwind = ntohl(ackinfo->rwind);

- _proto("Rx ACK %%%u Info { rx=%u max=%u rwin=%u jm=%u }",
- sp->hdr.serial,
- ntohl(ackinfo->rxMTU), ntohl(ackinfo->maxMTU),
- rwind, ntohl(ackinfo->jumbo_max));
-
if (rwind > RXRPC_TX_MAX_WINDOW)
rwind = RXRPC_TX_MAX_WINDOW;
if (call->tx_winsize != rwind) {
@@ -855,7 +847,6 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
}

if (ack.reason == RXRPC_ACK_PING) {
- _proto("Rx ACK %%%u PING Request", ack_serial);
rxrpc_send_ACK(call, RXRPC_ACK_PING_RESPONSE, ack_serial,
rxrpc_propose_ack_respond_to_ping);
} else if (sp->hdr.flags & RXRPC_REQUEST_ACK) {
@@ -1014,9 +1005,6 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
static void rxrpc_input_ackall(struct rxrpc_call *call, struct sk_buff *skb)
{
struct rxrpc_ack_summary summary = { 0 };
- struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
-
- _proto("Rx ACKALL %%%u", sp->hdr.serial);

spin_lock(&call->input_lock);

@@ -1044,8 +1032,6 @@ static void rxrpc_input_abort(struct rxrpc_call *call, struct sk_buff *skb)

trace_rxrpc_rx_abort(call, sp->hdr.serial, abort_code);

- _proto("Rx ABORT %%%u { %x }", sp->hdr.serial, abort_code);
-
rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED,
abort_code, -ECONNABORTED);
}
@@ -1081,8 +1067,6 @@ static void rxrpc_input_call_packet(struct rxrpc_call *call,
goto no_free;

case RXRPC_PACKET_TYPE_BUSY:
- _proto("Rx BUSY %%%u", sp->hdr.serial);
-
/* Just ignore BUSY packets from the server; the retry and
* lifespan timers will take care of business. BUSY packets
* from the client don't make sense.
@@ -1325,7 +1309,6 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb)
goto discard;

default:
- _proto("Rx Bad Packet Type %u", sp->hdr.type);
goto bad_message;
}

diff --git a/net/rxrpc/local_event.c b/net/rxrpc/local_event.c
index 19e929c7c38b..f23a3fbabbda 100644
--- a/net/rxrpc/local_event.c
+++ b/net/rxrpc/local_event.c
@@ -63,8 +63,6 @@ static void rxrpc_send_version_request(struct rxrpc_local *local,

len = iov[0].iov_len + iov[1].iov_len;

- _proto("Tx VERSION (reply)");
-
ret = kernel_sendmsg(local->socket, &msg, iov, 2, len);
if (ret < 0)
trace_rxrpc_tx_fail(local->debug_id, 0, ret,
@@ -98,7 +96,6 @@ void rxrpc_process_local_events(struct rxrpc_local *local)
if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header),
&v, 1) < 0)
return;
- _proto("Rx VERSION { %02x }", v);
if (v == 0)
rxrpc_send_version_request(local, &sp->hdr, skb);
break;
diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
index c5eed0e83e47..635acf3dbd77 100644
--- a/net/rxrpc/output.c
+++ b/net/rxrpc/output.c
@@ -701,8 +701,6 @@ void rxrpc_send_keepalive(struct rxrpc_peer *peer)

len = iov[0].iov_len + iov[1].iov_len;

- _proto("Tx VERSION (keepalive)");
-
iov_iter_kvec(&msg.msg_iter, WRITE, iov, 2, len);
ret = do_udp_sendmsg(peer->local->socket, &msg, len);
if (ret < 0)
diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
index cda3890657a9..be781c156e89 100644
--- a/net/rxrpc/peer_event.c
+++ b/net/rxrpc/peer_event.c
@@ -253,15 +253,12 @@ static void rxrpc_store_error(struct rxrpc_peer *peer,
break;

default:
- _proto("Rx Received ICMP error { type=%u code=%u }",
- ee->ee_type, ee->ee_code);
break;
}
break;

case SO_EE_ORIGIN_NONE:
case SO_EE_ORIGIN_LOCAL:
- _proto("Rx Received local error { error=%d }", err);
compl = RXRPC_CALL_LOCAL_ERROR;
break;

@@ -270,7 +267,6 @@ static void rxrpc_store_error(struct rxrpc_peer *peer,
err = EHOSTUNREACH;
fallthrough;
default:
- _proto("Rx Received error report { orig=%u }", ee->ee_origin);
break;
}

diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c
index 110a5550c0a6..36cf40442a7e 100644
--- a/net/rxrpc/rxkad.c
+++ b/net/rxrpc/rxkad.c
@@ -704,7 +704,6 @@ static int rxkad_issue_challenge(struct rxrpc_connection *conn)

serial = atomic_inc_return(&conn->serial);
whdr.serial = htonl(serial);
- _proto("Tx CHALLENGE %%%u", serial);

ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len);
if (ret < 0) {
@@ -762,7 +761,6 @@ static int rxkad_send_response(struct rxrpc_connection *conn,

serial = atomic_inc_return(&conn->serial);
whdr.serial = htonl(serial);
- _proto("Tx RESPONSE %%%u", serial);

ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 3, len);
if (ret < 0) {
@@ -856,8 +854,7 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn,
nonce = ntohl(challenge.nonce);
min_level = ntohl(challenge.min_level);

- _proto("Rx CHALLENGE %%%u { v=%u n=%u ml=%u }",
- sp->hdr.serial, version, nonce, min_level);
+ trace_rxrpc_rx_challenge(conn, sp->hdr.serial, version, nonce, min_level);

eproto = tracepoint_string("chall_ver");
abort_code = RXKADINCONSISTENCY;
@@ -1139,8 +1136,8 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
version = ntohl(response->version);
ticket_len = ntohl(response->ticket_len);
kvno = ntohl(response->kvno);
- _proto("Rx RESPONSE %%%u { v=%u kv=%u tl=%u }",
- sp->hdr.serial, version, kvno, ticket_len);
+
+ trace_rxrpc_rx_response(conn, sp->hdr.serial, version, kvno, ticket_len);

eproto = tracepoint_string("rxkad_rsp_ver");
abort_code = RXKADINCONSISTENCY;


2022-11-23 11:13:00

by David Howells

[permalink] [raw]
Subject: [PATCH net-next 13/13] rxrpc: trace: Don't use __builtin_return_address for sk_buff tracing

In rxrpc tracing, use enums to generate lists of points of interest rather
than __builtin_return_address() for the sk_buff tracepoint.
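
The resulting API shape is the same as for the other ref-traced objects (a
standalone sketch with invented demo_* names, not the rxrpc helpers): each
get/put helper takes an enum naming the caller's point of interest, so the
trace line records why a reference changed rather than a return address that
has to be resolved against a particular build:

	#include <stdio.h>

	enum demo_skb_trace {
		demo_skb_get_ack,
		demo_skb_put_ack,
		demo_skb_put_input,
	};

	static const char *demo_skb_names[] = {
		[demo_skb_get_ack]	= "GET ack  ",
		[demo_skb_put_ack]	= "PUT ack  ",
		[demo_skb_put_input]	= "PUT input",
	};

	struct demo_skb { int ref; };

	static void demo_trace(struct demo_skb *skb, enum demo_skb_trace why)
	{
		printf("s=%p %s r=%d\n", (void *)skb, demo_skb_names[why], skb->ref);
	}

	static void demo_get_skb(struct demo_skb *skb, enum demo_skb_trace why)
	{
		skb->ref++;
		demo_trace(skb, why);
	}

	static void demo_free_skb(struct demo_skb *skb, enum demo_skb_trace why)
	{
		skb->ref--;
		demo_trace(skb, why);
	}

	int main(void)
	{
		struct demo_skb skb = { .ref = 1 };

		demo_get_skb(&skb, demo_skb_get_ack);
		demo_free_skb(&skb, demo_skb_put_ack);
		demo_free_skb(&skb, demo_skb_put_input);
		return 0;
	}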

Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: [email protected]
---

include/trace/events/rxrpc.h | 57 ++++++++++++++++++++++++------------------
net/rxrpc/call_event.c | 4 +--
net/rxrpc/call_object.c | 2 +
net/rxrpc/conn_event.c | 6 ++--
net/rxrpc/input.c | 36 +++++++++++++--------------
net/rxrpc/local_event.c | 4 +--
net/rxrpc/output.c | 6 ++--
net/rxrpc/peer_event.c | 8 +++---
net/rxrpc/recvmsg.c | 6 ++--
net/rxrpc/skbuff.c | 36 +++++++++++----------------
10 files changed, 84 insertions(+), 81 deletions(-)

diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
index 8af077c2927a..b54af1920d0d 100644
--- a/include/trace/events/rxrpc.h
+++ b/include/trace/events/rxrpc.h
@@ -17,19 +17,31 @@
* Declare tracing information enums and their string mappings for display.
*/
#define rxrpc_skb_traces \
- EM(rxrpc_skb_ack, "ACK") \
- EM(rxrpc_skb_cleaned, "CLN") \
- EM(rxrpc_skb_cloned_jumbo, "CLJ") \
- EM(rxrpc_skb_freed, "FRE") \
- EM(rxrpc_skb_got, "GOT") \
- EM(rxrpc_skb_lost, "*L*") \
- EM(rxrpc_skb_new, "NEW") \
- EM(rxrpc_skb_purged, "PUR") \
- EM(rxrpc_skb_received, "RCV") \
- EM(rxrpc_skb_rotated, "ROT") \
- EM(rxrpc_skb_seen, "SEE") \
- EM(rxrpc_skb_unshared, "UNS") \
- E_(rxrpc_skb_unshared_nomem, "US0")
+ EM(rxrpc_skb_eaten_by_unshare, "ETN unshare ") \
+ EM(rxrpc_skb_eaten_by_unshare_nomem, "ETN unshar-nm") \
+ EM(rxrpc_skb_get_ack, "GET ack ") \
+ EM(rxrpc_skb_get_conn_work, "GET conn-work") \
+ EM(rxrpc_skb_get_to_recvmsg, "GET to-recv ") \
+ EM(rxrpc_skb_get_to_recvmsg_oos, "GET to-recv-o") \
+ EM(rxrpc_skb_new_encap_rcv, "NEW encap-rcv") \
+ EM(rxrpc_skb_new_error_report, "NEW error-rpt") \
+ EM(rxrpc_skb_new_jumbo_subpacket, "NEW jumbo-sub") \
+ EM(rxrpc_skb_new_unshared, "NEW unshared ") \
+ EM(rxrpc_skb_put_ack, "PUT ack ") \
+ EM(rxrpc_skb_put_conn_work, "PUT conn-work") \
+ EM(rxrpc_skb_put_error_report, "PUT error-rep") \
+ EM(rxrpc_skb_put_input, "PUT input ") \
+ EM(rxrpc_skb_put_jumbo_subpacket, "PUT jumbo-sub") \
+ EM(rxrpc_skb_put_lose, "PUT lose ") \
+ EM(rxrpc_skb_put_purge, "PUT purge ") \
+ EM(rxrpc_skb_put_rotate, "PUT rotate ") \
+ EM(rxrpc_skb_put_unknown, "PUT unknown ") \
+ EM(rxrpc_skb_see_conn_work, "SEE conn-work") \
+ EM(rxrpc_skb_see_local_work, "SEE locl-work") \
+ EM(rxrpc_skb_see_recvmsg, "SEE recvmsg ") \
+ EM(rxrpc_skb_see_reject, "SEE reject ") \
+ EM(rxrpc_skb_see_rotate, "SEE rotate ") \
+ E_(rxrpc_skb_see_version, "SEE version ")

#define rxrpc_local_traces \
EM(rxrpc_local_free, "FREE ") \
@@ -583,33 +595,30 @@ TRACE_EVENT(rxrpc_call,
);

TRACE_EVENT(rxrpc_skb,
- TP_PROTO(struct sk_buff *skb, enum rxrpc_skb_trace op,
- int usage, int mod_count, const void *where),
+ TP_PROTO(struct sk_buff *skb, int usage, int mod_count,
+ enum rxrpc_skb_trace why),

- TP_ARGS(skb, op, usage, mod_count, where),
+ TP_ARGS(skb, usage, mod_count, why),

TP_STRUCT__entry(
__field(struct sk_buff *, skb )
- __field(enum rxrpc_skb_trace, op )
__field(int, usage )
__field(int, mod_count )
- __field(const void *, where )
+ __field(enum rxrpc_skb_trace, why )
),

TP_fast_assign(
__entry->skb = skb;
- __entry->op = op;
__entry->usage = usage;
__entry->mod_count = mod_count;
- __entry->where = where;
+ __entry->why = why;
),

- TP_printk("s=%p Rx %s u=%d m=%d p=%pSR",
+ TP_printk("s=%p Rx %s u=%d m=%d",
__entry->skb,
- __print_symbolic(__entry->op, rxrpc_skb_traces),
+ __print_symbolic(__entry->why, rxrpc_skb_traces),
__entry->usage,
- __entry->mod_count,
- __entry->where)
+ __entry->mod_count)
);

TRACE_EVENT(rxrpc_rx_packet,
diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
index 7ea26fc358a0..8021b9230c1e 100644
--- a/net/rxrpc/call_event.c
+++ b/net/rxrpc/call_event.c
@@ -153,7 +153,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
spin_lock_bh(&call->acks_ack_lock);
ack_skb = call->acks_soft_tbl;
if (ack_skb) {
- rxrpc_get_skb(ack_skb, rxrpc_skb_ack);
+ rxrpc_get_skb(ack_skb, rxrpc_skb_get_ack);
ack = (void *)ack_skb->data + sizeof(struct rxrpc_wire_header);
}
spin_unlock_bh(&call->acks_ack_lock);
@@ -252,7 +252,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
no_further_resend:
spin_unlock(&call->tx_lock);
no_resend:
- rxrpc_free_skb(ack_skb, rxrpc_skb_freed);
+ rxrpc_free_skb(ack_skb, rxrpc_skb_put_ack);

resend_at = nsecs_to_jiffies(ktime_to_ns(ktime_sub(now, oldest)));
resend_at += jiffies + rxrpc_get_rto_backoff(call->peer,
diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
index afd957f6dc1c..815209673115 100644
--- a/net/rxrpc/call_object.c
+++ b/net/rxrpc/call_object.c
@@ -663,7 +663,7 @@ void rxrpc_cleanup_call(struct rxrpc_call *call)
rxrpc_put_txbuf(txb, rxrpc_txbuf_put_cleaned);
}
rxrpc_put_txbuf(call->tx_pending, rxrpc_txbuf_put_cleaned);
- rxrpc_free_skb(call->acks_soft_tbl, rxrpc_skb_cleaned);
+ rxrpc_free_skb(call->acks_soft_tbl, rxrpc_skb_put_ack);

call_rcu(&call->rcu, rxrpc_rcu_destroy_call);
}
diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
index 817f895c77ca..49d885f73fa5 100644
--- a/net/rxrpc/conn_event.c
+++ b/net/rxrpc/conn_event.c
@@ -437,7 +437,7 @@ static void rxrpc_do_process_connection(struct rxrpc_connection *conn)
/* go through the conn-level event packets, releasing the ref on this
* connection that each one has when we've finished with it */
while ((skb = skb_dequeue(&conn->rx_queue))) {
- rxrpc_see_skb(skb, rxrpc_skb_seen);
+ rxrpc_see_skb(skb, rxrpc_skb_see_conn_work);
ret = rxrpc_process_event(conn, skb, &abort_code);
switch (ret) {
case -EPROTO:
@@ -449,7 +449,7 @@ static void rxrpc_do_process_connection(struct rxrpc_connection *conn)
goto requeue_and_leave;
case -ECONNABORTED:
default:
- rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_free_skb(skb, rxrpc_skb_put_conn_work);
break;
}
}
@@ -463,7 +463,7 @@ static void rxrpc_do_process_connection(struct rxrpc_connection *conn)
protocol_error:
if (rxrpc_abort_connection(conn, ret, abort_code) < 0)
goto requeue_and_leave;
- rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_free_skb(skb, rxrpc_skb_put_conn_work);
return;
}

diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
index 09b44cd11c9b..ab8b7a1be935 100644
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -485,7 +485,7 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb)
rxrpc_propose_ack_input_data);

err_free:
- rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_free_skb(skb, rxrpc_skb_put_input);
}

/*
@@ -513,7 +513,7 @@ static bool rxrpc_input_split_jumbo(struct rxrpc_call *call, struct sk_buff *skb
kdebug("couldn't clone");
return false;
}
- rxrpc_new_skb(jskb, rxrpc_skb_cloned_jumbo);
+ rxrpc_new_skb(jskb, rxrpc_skb_new_jumbo_subpacket);
jsp = rxrpc_skb(jskb);
jsp->offset = offset;
jsp->len = RXRPC_JUMBO_DATALEN;
@@ -553,7 +553,7 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)

state = READ_ONCE(call->state);
if (state >= RXRPC_CALL_COMPLETE) {
- rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_free_skb(skb, rxrpc_skb_put_input);
return;
}

@@ -563,14 +563,14 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
if (sp->hdr.securityIndex != 0) {
struct sk_buff *nskb = skb_unshare(skb, GFP_ATOMIC);
if (!nskb) {
- rxrpc_eaten_skb(skb, rxrpc_skb_unshared_nomem);
+ rxrpc_eaten_skb(skb, rxrpc_skb_eaten_by_unshare_nomem);
return;
}

if (nskb != skb) {
- rxrpc_eaten_skb(skb, rxrpc_skb_received);
+ rxrpc_eaten_skb(skb, rxrpc_skb_eaten_by_unshare);
skb = nskb;
- rxrpc_new_skb(skb, rxrpc_skb_unshared);
+ rxrpc_new_skb(skb, rxrpc_skb_new_unshared);
sp = rxrpc_skb(skb);
}
}
@@ -609,7 +609,7 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
rxrpc_notify_socket(call);

spin_unlock(&call->input_lock);
- rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_free_skb(skb, rxrpc_skb_put_input);
_leave(" [queued]");
}

@@ -994,8 +994,8 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
out:
spin_unlock(&call->input_lock);
out_not_locked:
- rxrpc_free_skb(skb_put, rxrpc_skb_freed);
- rxrpc_free_skb(skb_old, rxrpc_skb_freed);
+ rxrpc_free_skb(skb_put, rxrpc_skb_put_input);
+ rxrpc_free_skb(skb_old, rxrpc_skb_put_ack);
}

/*
@@ -1075,7 +1075,7 @@ static void rxrpc_input_call_packet(struct rxrpc_call *call,
break;
}

- rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_free_skb(skb, rxrpc_skb_put_input);
no_free:
_leave("");
}
@@ -1137,7 +1137,7 @@ static void rxrpc_post_packet_to_local(struct rxrpc_local *local,
skb_queue_tail(&local->event_queue, skb);
rxrpc_queue_local(local);
} else {
- rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_free_skb(skb, rxrpc_skb_put_input);
}
}

@@ -1150,7 +1150,7 @@ static void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb)
skb_queue_tail(&local->reject_queue, skb);
rxrpc_queue_local(local);
} else {
- rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_free_skb(skb, rxrpc_skb_put_input);
}
}

@@ -1228,7 +1228,7 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb)
if (skb->tstamp == 0)
skb->tstamp = ktime_get_real();

- rxrpc_new_skb(skb, rxrpc_skb_received);
+ rxrpc_new_skb(skb, rxrpc_skb_new_encap_rcv);

skb_pull(skb, sizeof(struct udphdr));

@@ -1245,7 +1245,7 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb)
static int lose;
if ((lose++ & 7) == 7) {
trace_rxrpc_rx_lose(sp);
- rxrpc_free_skb(skb, rxrpc_skb_lost);
+ rxrpc_free_skb(skb, rxrpc_skb_put_lose);
return 0;
}
}
@@ -1286,14 +1286,14 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb)
if (sp->hdr.securityIndex != 0) {
struct sk_buff *nskb = skb_unshare(skb, GFP_ATOMIC);
if (!nskb) {
- rxrpc_eaten_skb(skb, rxrpc_skb_unshared_nomem);
+ rxrpc_eaten_skb(skb, rxrpc_skb_eaten_by_unshare_nomem);
goto out;
}

if (nskb != skb) {
- rxrpc_eaten_skb(skb, rxrpc_skb_received);
+ rxrpc_eaten_skb(skb, rxrpc_skb_eaten_by_unshare);
skb = nskb;
- rxrpc_new_skb(skb, rxrpc_skb_unshared);
+ rxrpc_new_skb(skb, rxrpc_skb_new_unshared);
sp = rxrpc_skb(skb);
}
}
@@ -1434,7 +1434,7 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb)
goto out;

discard:
- rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_free_skb(skb, rxrpc_skb_put_input);
out:
trace_rxrpc_rx_done(0, 0);
return 0;
diff --git a/net/rxrpc/local_event.c b/net/rxrpc/local_event.c
index f23a3fbabbda..c344383a20b2 100644
--- a/net/rxrpc/local_event.c
+++ b/net/rxrpc/local_event.c
@@ -88,7 +88,7 @@ void rxrpc_process_local_events(struct rxrpc_local *local)
if (skb) {
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);

- rxrpc_see_skb(skb, rxrpc_skb_seen);
+ rxrpc_see_skb(skb, rxrpc_skb_see_local_work);
_debug("{%d},{%u}", local->debug_id, sp->hdr.type);

switch (sp->hdr.type) {
@@ -105,7 +105,7 @@ void rxrpc_process_local_events(struct rxrpc_local *local)
break;
}

- rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_free_skb(skb, rxrpc_skb_put_input);
}

_leave("");
diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
index d324e88f7642..131c7a76fb06 100644
--- a/net/rxrpc/output.c
+++ b/net/rxrpc/output.c
@@ -615,7 +615,7 @@ void rxrpc_reject_packets(struct rxrpc_local *local)
memset(&whdr, 0, sizeof(whdr));

while ((skb = skb_dequeue(&local->reject_queue))) {
- rxrpc_see_skb(skb, rxrpc_skb_seen);
+ rxrpc_see_skb(skb, rxrpc_skb_see_reject);
sp = rxrpc_skb(skb);

switch (skb->mark) {
@@ -631,7 +631,7 @@ void rxrpc_reject_packets(struct rxrpc_local *local)
ioc = 2;
break;
default:
- rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_free_skb(skb, rxrpc_skb_put_input);
continue;
}

@@ -656,7 +656,7 @@ void rxrpc_reject_packets(struct rxrpc_local *local)
rxrpc_tx_point_reject);
}

- rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_free_skb(skb, rxrpc_skb_put_input);
}

_leave("");
diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
index b28739d10927..f35cfc458dcf 100644
--- a/net/rxrpc/peer_event.c
+++ b/net/rxrpc/peer_event.c
@@ -158,12 +158,12 @@ void rxrpc_error_report(struct sock *sk)
_leave("UDP socket errqueue empty");
return;
}
- rxrpc_new_skb(skb, rxrpc_skb_received);
+ rxrpc_new_skb(skb, rxrpc_skb_new_error_report);
serr = SKB_EXT_ERR(skb);
if (!skb->len && serr->ee.ee_origin == SO_EE_ORIGIN_TIMESTAMPING) {
_leave("UDP empty message");
rcu_read_unlock();
- rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_free_skb(skb, rxrpc_skb_put_error_report);
return;
}

@@ -172,7 +172,7 @@ void rxrpc_error_report(struct sock *sk)
peer = NULL;
if (!peer) {
rcu_read_unlock();
- rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_free_skb(skb, rxrpc_skb_put_error_report);
_leave(" [no peer]");
return;
}
@@ -189,7 +189,7 @@ void rxrpc_error_report(struct sock *sk)
rxrpc_store_error(peer, serr);
out:
rcu_read_unlock();
- rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_free_skb(skb, rxrpc_skb_put_error_report);
rxrpc_put_peer(peer, rxrpc_peer_put_input_error);

_leave("");
diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
index c84d2b620396..bfac9e09347e 100644
--- a/net/rxrpc/recvmsg.c
+++ b/net/rxrpc/recvmsg.c
@@ -229,7 +229,7 @@ static void rxrpc_rotate_rx_window(struct rxrpc_call *call)
_enter("%d", call->debug_id);

skb = skb_dequeue(&call->recvmsg_queue);
- rxrpc_see_skb(skb, rxrpc_skb_rotated);
+ rxrpc_see_skb(skb, rxrpc_skb_see_rotate);

sp = rxrpc_skb(skb);
tseq = sp->hdr.seq;
@@ -240,7 +240,7 @@ static void rxrpc_rotate_rx_window(struct rxrpc_call *call)
if (after(tseq, call->rx_consumed))
smp_store_release(&call->rx_consumed, tseq);

- rxrpc_free_skb(skb, rxrpc_skb_freed);
+ rxrpc_free_skb(skb, rxrpc_skb_put_rotate);

trace_rxrpc_receive(call, last ? rxrpc_receive_rotate_last : rxrpc_receive_rotate,
serial, call->rx_consumed);
@@ -302,7 +302,7 @@ static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call,
*/
skb = skb_peek(&call->recvmsg_queue);
while (skb) {
- rxrpc_see_skb(skb, rxrpc_skb_seen);
+ rxrpc_see_skb(skb, rxrpc_skb_see_recvmsg);
sp = rxrpc_skb(skb);
seq = sp->hdr.seq;

diff --git a/net/rxrpc/skbuff.c b/net/rxrpc/skbuff.c
index 0c827d5bb2b8..ebe0c75e7b07 100644
--- a/net/rxrpc/skbuff.c
+++ b/net/rxrpc/skbuff.c
@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-or-later
-/* ar-skbuff.c: socket buffer destruction handling
+/* Socket buffer accounting
*
* Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
* Written by David Howells ([email protected])
@@ -19,56 +19,50 @@
/*
* Note the allocation or reception of a socket buffer.
*/
-void rxrpc_new_skb(struct sk_buff *skb, enum rxrpc_skb_trace op)
+void rxrpc_new_skb(struct sk_buff *skb, enum rxrpc_skb_trace why)
{
- const void *here = __builtin_return_address(0);
int n = atomic_inc_return(select_skb_count(skb));
- trace_rxrpc_skb(skb, op, refcount_read(&skb->users), n, here);
+ trace_rxrpc_skb(skb, refcount_read(&skb->users), n, why);
}

/*
* Note the re-emergence of a socket buffer from a queue or buffer.
*/
-void rxrpc_see_skb(struct sk_buff *skb, enum rxrpc_skb_trace op)
+void rxrpc_see_skb(struct sk_buff *skb, enum rxrpc_skb_trace why)
{
- const void *here = __builtin_return_address(0);
if (skb) {
int n = atomic_read(select_skb_count(skb));
- trace_rxrpc_skb(skb, op, refcount_read(&skb->users), n, here);
+ trace_rxrpc_skb(skb, refcount_read(&skb->users), n, why);
}
}

/*
* Note the addition of a ref on a socket buffer.
*/
-void rxrpc_get_skb(struct sk_buff *skb, enum rxrpc_skb_trace op)
+void rxrpc_get_skb(struct sk_buff *skb, enum rxrpc_skb_trace why)
{
- const void *here = __builtin_return_address(0);
int n = atomic_inc_return(select_skb_count(skb));
- trace_rxrpc_skb(skb, op, refcount_read(&skb->users), n, here);
+ trace_rxrpc_skb(skb, refcount_read(&skb->users), n, why);
skb_get(skb);
}

/*
* Note the dropping of a ref on a socket buffer by the core.
*/
-void rxrpc_eaten_skb(struct sk_buff *skb, enum rxrpc_skb_trace op)
+void rxrpc_eaten_skb(struct sk_buff *skb, enum rxrpc_skb_trace why)
{
- const void *here = __builtin_return_address(0);
int n = atomic_inc_return(&rxrpc_n_rx_skbs);
- trace_rxrpc_skb(skb, op, 0, n, here);
+ trace_rxrpc_skb(skb, 0, n, why);
}

/*
* Note the destruction of a socket buffer.
*/
-void rxrpc_free_skb(struct sk_buff *skb, enum rxrpc_skb_trace op)
+void rxrpc_free_skb(struct sk_buff *skb, enum rxrpc_skb_trace why)
{
- const void *here = __builtin_return_address(0);
if (skb) {
- int n;
- n = atomic_dec_return(select_skb_count(skb));
- trace_rxrpc_skb(skb, op, refcount_read(&skb->users), n, here);
+ int n = atomic_dec_return(select_skb_count(skb));
+ trace_rxrpc_skb(skb, refcount_read(&skb->users), n, why);
kfree_skb(skb);
}
}
@@ -78,12 +72,12 @@ void rxrpc_free_skb(struct sk_buff *skb, enum rxrpc_skb_trace op)
*/
void rxrpc_purge_queue(struct sk_buff_head *list)
{
- const void *here = __builtin_return_address(0);
struct sk_buff *skb;
+
while ((skb = skb_dequeue((list))) != NULL) {
int n = atomic_dec_return(select_skb_count(skb));
- trace_rxrpc_skb(skb, rxrpc_skb_purged,
- refcount_read(&skb->users), n, here);
+ trace_rxrpc_skb(skb, refcount_read(&skb->users), n,
+ rxrpc_skb_put_purge);
kfree_skb(skb);
}
}


2022-11-24 03:34:28

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [PATCH net-next 00/13] rxrpc: Increasing SACK size and moving away from softirq, part 2

On Wed, 23 Nov 2022 10:06:20 +0000 David Howells wrote:
> [!] Note that these patches are based on a merge of a fix in net/master
> with net-next/master. The fix makes a number of conflicting changes,
> so it's better if this set is built on top of it.

Please post as RFC if the patches don't apply.

2022-11-24 07:53:21

by David Howells

[permalink] [raw]
Subject: Re: [PATCH net-next 00/13] rxrpc: Increasing SACK size and moving away from softirq, part 2

What's the best way to base on a fix commit that's in net for patches in
net-next? Here I tried basing on a merge between them. Should I include the
fix patch on my net-next branch instead? Or will net be merged into net-next
at some point and I should wait for that?

Thanks,
David

2022-11-24 18:58:58

by Marc Dionne

[permalink] [raw]
Subject: Re: [PATCH net-next 01/13] rxrpc: Implement an in-kernel rxperf server for testing purposes

On Wed, Nov 23, 2022 at 6:06 AM David Howells <[email protected]> wrote:
>
> Implement an in-kernel rxperf server to allow kernel-based rxrpc services
> to be tested directly, unlike with AFS where they're accessed by the
> fileserver when the latter decides it wants to.
>
> This is implemented as a module that, if loaded, opens UDP port 7009
> (afs3-rmtsys) and listens on it for incoming calls. Calls can be generated
> using the rxperf command shipped with OpenAFS, for example.
>
> Signed-off-by: David Howells <[email protected]>
> cc: Marc Dionne <[email protected]>
> cc: [email protected]
> ---
>
> include/net/af_rxrpc.h | 1
> net/rxrpc/Kconfig | 7 +
> net/rxrpc/Makefile | 3
> net/rxrpc/rxperf.c | 614 ++++++++++++++++++++++++++++++++++++++++++++++++
> net/rxrpc/server_key.c | 25 ++
> 5 files changed, 650 insertions(+)
> create mode 100644 net/rxrpc/rxperf.c
>
> diff --git a/include/net/af_rxrpc.h b/include/net/af_rxrpc.h
> index b69ca695935c..dc033f08191e 100644
> --- a/include/net/af_rxrpc.h
> +++ b/include/net/af_rxrpc.h
> @@ -71,5 +71,6 @@ void rxrpc_kernel_set_max_life(struct socket *, struct rxrpc_call *,
> unsigned long);
>
> int rxrpc_sock_set_min_security_level(struct sock *sk, unsigned int val);
> +int rxrpc_sock_set_security_keyring(struct sock *, struct key *);
>
> #endif /* _NET_RXRPC_H */
> diff --git a/net/rxrpc/Kconfig b/net/rxrpc/Kconfig
> index accd35c05577..7ae023b37a83 100644
> --- a/net/rxrpc/Kconfig
> +++ b/net/rxrpc/Kconfig
> @@ -58,4 +58,11 @@ config RXKAD
>
> See Documentation/networking/rxrpc.rst.
>
> +config RXPERF
> + tristate "RxRPC test service"
> + help
> + Provide an rxperf service tester. This listens on UDP port 7009 for
> + incoming calls from the rxperf program (an example of which can be
> + found in OpenAFS).
> +
> endif
> diff --git a/net/rxrpc/Makefile b/net/rxrpc/Makefile
> index fdeba488fc6e..79687477d93c 100644
> --- a/net/rxrpc/Makefile
> +++ b/net/rxrpc/Makefile
> @@ -36,3 +36,6 @@ rxrpc-y := \
> rxrpc-$(CONFIG_PROC_FS) += proc.o
> rxrpc-$(CONFIG_RXKAD) += rxkad.o
> rxrpc-$(CONFIG_SYSCTL) += sysctl.o
> +
> +
> +obj-$(CONFIG_RXPERF) += rxperf.o
> diff --git a/net/rxrpc/rxperf.c b/net/rxrpc/rxperf.c
> new file mode 100644
> index 000000000000..7f8a1a186da8
> --- /dev/null
> +++ b/net/rxrpc/rxperf.c
> @@ -0,0 +1,614 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/* In-kernel rxperf server for testing purposes.
> + *
> + * Copyright (C) 2022 Red Hat, Inc. All Rights Reserved.
> + * Written by David Howells ([email protected])
> + */
> +
> +#define pr_fmt(fmt) "rxperf: " fmt
> +#include <linux/slab.h>
> +#include <net/sock.h>
> +//#include <net/netns/generic.h>
> +#include <net/af_rxrpc.h>
> +
> +#define RXPERF_PORT 7009
> +#define RX_PERF_SERVICE 147
> +#define RX_PERF_VERSION 3
> +#define RX_PERF_SEND 0
> +#define RX_PERF_RECV 1
> +#define RX_PERF_RPC 3
> +#define RX_PERF_FILE 4
> +#define RX_PERF_MAGIC_COOKIE 0x4711
> +
> +struct rxperf_proto_params {
> + __be32 version;
> + __be32 type;
> + __be32 rsize;
> + __be32 wsize;
> +} __packed;
> +
> +static const u8 rxperf_magic_cookie[] = { 0x00, 0x00, 0x47, 0x11 };
> +static const u8 secret[8] = { 0xa7, 0x83, 0x8a, 0xcb, 0xc7, 0x83, 0xec, 0x94 };
> +
> +enum rxperf_call_state {
> + RXPERF_CALL_SV_AWAIT_PARAMS, /* Server: Awaiting parameter block */
> + RXPERF_CALL_SV_AWAIT_REQUEST, /* Server: Awaiting request data */
> + RXPERF_CALL_SV_REPLYING, /* Server: Replying */
> + RXPERF_CALL_SV_AWAIT_ACK, /* Server: Awaiting final ACK */
> + RXPERF_CALL_COMPLETE, /* Completed or failed */
> +};
> +
> +struct rxperf_call {
> + struct rxrpc_call *rxcall;
> + struct iov_iter iter;
> + struct kvec kvec[1];
> + struct work_struct work;
> + const char *type;
> + size_t iov_len;
> + size_t req_len; /* Size of request blob */
> + size_t reply_len; /* Size of reply blob */
> + unsigned int debug_id;
> + unsigned int operation_id;
> + struct rxperf_proto_params params;
> + __be32 tmp[2];
> + s32 abort_code;
> + enum rxperf_call_state state;
> + short error;
> + unsigned short unmarshal;
> + u16 service_id;
> + int (*deliver)(struct rxperf_call *call);
> + void (*processor)(struct work_struct *work);
> +};
> +
> +static struct socket *rxperf_socket;
> +static struct key *rxperf_sec_keyring; /* Ring of security/crypto keys */
> +static struct workqueue_struct *rxperf_workqueue;
> +
> +static void rxperf_deliver_to_call(struct work_struct *work);
> +static int rxperf_deliver_param_block(struct rxperf_call *call);
> +static int rxperf_deliver_request(struct rxperf_call *call);
> +static int rxperf_process_call(struct rxperf_call *call);
> +static void rxperf_charge_preallocation(struct work_struct *work);
> +
> +static DECLARE_WORK(rxperf_charge_preallocation_work,
> + rxperf_charge_preallocation);
> +
> +static inline void rxperf_set_call_state(struct rxperf_call *call,
> + enum rxperf_call_state to)
> +{
> + call->state = to;
> +}
> +
> +static inline void rxperf_set_call_complete(struct rxperf_call *call,
> + int error, s32 remote_abort)
> +{
> + if (call->state != RXPERF_CALL_COMPLETE) {
> + call->abort_code = remote_abort;
> + call->error = error;
> + call->state = RXPERF_CALL_COMPLETE;
> + }
> +}
> +
> +static void rxperf_rx_discard_new_call(struct rxrpc_call *rxcall,
> + unsigned long user_call_ID)
> +{
> + kfree((struct rxperf_call *)user_call_ID);
> +}
> +
> +static void rxperf_rx_new_call(struct sock *sk, struct rxrpc_call *rxcall,
> + unsigned long user_call_ID)
> +{
> + queue_work(rxperf_workqueue, &rxperf_charge_preallocation_work);
> +}
> +
> +static void rxperf_queue_call_work(struct rxperf_call *call)
> +{
> + queue_work(rxperf_workqueue, &call->work);
> +}
> +
> +static void rxperf_notify_rx(struct sock *sk, struct rxrpc_call *rxcall,
> + unsigned long call_user_ID)
> +{
> + struct rxperf_call *call = (struct rxperf_call *)call_user_ID;
> +
> + if (call->state != RXPERF_CALL_COMPLETE)
> + rxperf_queue_call_work(call);
> +}
> +
> +static void rxperf_rx_attach(struct rxrpc_call *rxcall, unsigned long user_call_ID)
> +{
> + struct rxperf_call *call = (struct rxperf_call *)user_call_ID;
> +
> + call->rxcall = rxcall;
> +}
> +
> +static void rxperf_notify_end_reply_tx(struct sock *sock,
> + struct rxrpc_call *rxcall,
> + unsigned long call_user_ID)
> +{
> + rxperf_set_call_state((struct rxperf_call *)call_user_ID,
> + RXPERF_CALL_SV_AWAIT_ACK);
> +}
> +
> +/*
> + * Charge the incoming call preallocation.
> + */
> +static void rxperf_charge_preallocation(struct work_struct *work)
> +{
> + struct rxperf_call *call;
> +
> + for (;;) {
> + call = kzalloc(sizeof(*call), GFP_KERNEL);
> + if (!call)
> + break;
> +
> + call->type = "unset";
> + call->debug_id = atomic_inc_return(&rxrpc_debug_id);
> + call->deliver = rxperf_deliver_param_block;
> + call->state = RXPERF_CALL_SV_AWAIT_PARAMS;
> + call->service_id = RX_PERF_SERVICE;
> + call->iov_len = sizeof(call->params);
> + call->kvec[0].iov_len = sizeof(call->params);
> + call->kvec[0].iov_base = &call->params;
> + iov_iter_kvec(&call->iter, READ, call->kvec, 1, call->iov_len);
> + INIT_WORK(&call->work, rxperf_deliver_to_call);
> +
> + if (rxrpc_kernel_charge_accept(rxperf_socket,
> + rxperf_notify_rx,
> + rxperf_rx_attach,
> + (unsigned long)call,
> + GFP_KERNEL,
> + call->debug_id) < 0)
> + break;
> + call = NULL;
> + }
> +
> + kfree(call);
> +}
> +
> +/*
> + * Open an rxrpc socket and bind it to be a server for callback notifications
> + * - the socket is left in blocking mode and non-blocking ops use MSG_DONTWAIT
> + */
> +static int rxperf_open_socket(void)
> +{
> + struct sockaddr_rxrpc srx;
> + struct socket *socket;
> + int ret;
> +
> + ret = sock_create_kern(&init_net, AF_RXRPC, SOCK_DGRAM, PF_INET6,
> + &socket);
> + if (ret < 0)
> + goto error_1;
> +
> + socket->sk->sk_allocation = GFP_NOFS;
> +
> + /* bind the callback manager's address to make this a server socket */
> + memset(&srx, 0, sizeof(srx));
> + srx.srx_family = AF_RXRPC;
> + srx.srx_service = RX_PERF_SERVICE;
> + srx.transport_type = SOCK_DGRAM;
> + srx.transport_len = sizeof(srx.transport.sin6);
> + srx.transport.sin6.sin6_family = AF_INET6;
> + srx.transport.sin6.sin6_port = htons(RXPERF_PORT);
> +
> + ret = rxrpc_sock_set_min_security_level(socket->sk,
> + RXRPC_SECURITY_ENCRYPT);
> + if (ret < 0)
> + goto error_2;
> +
> + ret = rxrpc_sock_set_security_keyring(socket->sk, rxperf_sec_keyring);
> +
> + ret = kernel_bind(socket, (struct sockaddr *)&srx, sizeof(srx));
> + if (ret < 0)
> + goto error_2;
> +
> + rxrpc_kernel_new_call_notification(socket, rxperf_rx_new_call,
> + rxperf_rx_discard_new_call);
> +
> + ret = kernel_listen(socket, INT_MAX);
> + if (ret < 0)
> + goto error_2;
> +
> + rxperf_socket = socket;
> + rxperf_charge_preallocation(&rxperf_charge_preallocation_work);
> + return 0;
> +
> +error_2:
> + sock_release(socket);
> +error_1:
> + pr_err("Can't set up rxperf socket: %d\n", ret);
> + return ret;
> +}
> +
> +/*
> + * close the rxrpc socket rxperf was using
> + */
> +static void rxperf_close_socket(void)
> +{
> + kernel_listen(rxperf_socket, 0);
> + kernel_sock_shutdown(rxperf_socket, SHUT_RDWR);
> + flush_workqueue(rxperf_workqueue);
> + sock_release(rxperf_socket);
> +}
> +
> +/*
> + * Log remote abort codes that indicate that we have a protocol disagreement
> + * with the server.
> + */
> +static void rxperf_log_error(struct rxperf_call *call, s32 remote_abort)
> +{
> + static int max = 0;
> + const char *msg;
> + int m;
> +
> + switch (remote_abort) {
> + case RX_EOF: msg = "unexpected EOF"; break;
> + case RXGEN_CC_MARSHAL: msg = "client marshalling"; break;
> + case RXGEN_CC_UNMARSHAL: msg = "client unmarshalling"; break;
> + case RXGEN_SS_MARSHAL: msg = "server marshalling"; break;
> + case RXGEN_SS_UNMARSHAL: msg = "server unmarshalling"; break;
> + case RXGEN_DECODE: msg = "opcode decode"; break;
> + case RXGEN_SS_XDRFREE: msg = "server XDR cleanup"; break;
> + case RXGEN_CC_XDRFREE: msg = "client XDR cleanup"; break;
> + case -32: msg = "insufficient data"; break;
> + default:
> + return;
> + }
> +
> + m = max;
> + if (m < 3) {
> + max = m + 1;
> + pr_info("Peer reported %s failure on %s\n", msg, call->type);
> + }
> +}
> +
> +/*
> + * deliver messages to a call
> + */
> +static void rxperf_deliver_to_call(struct work_struct *work)
> +{
> + struct rxperf_call *call = container_of(work, struct rxperf_call, work);
> + enum rxperf_call_state state;
> + u32 abort_code, remote_abort = 0;
> + int ret;
> +
> + if (call->state == RXPERF_CALL_COMPLETE)
> + return;
> +
> + while (state = call->state,
> + state == RXPERF_CALL_SV_AWAIT_PARAMS ||
> + state == RXPERF_CALL_SV_AWAIT_REQUEST ||
> + state == RXPERF_CALL_SV_AWAIT_ACK
> + ) {
> + if (state == RXPERF_CALL_SV_AWAIT_ACK) {
> + if (!rxrpc_kernel_check_life(rxperf_socket, call->rxcall))
> + goto call_complete;
> + return;
> + }
> +
> + ret = call->deliver(call);
> + if (ret == 0)
> + ret = rxperf_process_call(call);
> +
> + switch (ret) {
> + case 0:
> + continue;
> + case -EINPROGRESS:
> + case -EAGAIN:
> + return;
> + case -ECONNABORTED:
> + rxperf_log_error(call, call->abort_code);
> + goto call_complete;
> + case -EOPNOTSUPP:
> + abort_code = RXGEN_OPCODE;
> + rxrpc_kernel_abort_call(rxperf_socket, call->rxcall,
> + abort_code, ret, "GOP");
> + goto call_complete;
> + case -ENOTSUPP:
> + abort_code = RX_USER_ABORT;
> + rxrpc_kernel_abort_call(rxperf_socket, call->rxcall,
> + abort_code, ret, "GUA");
> + goto call_complete;
> + case -EIO:
> + pr_err("Call %u in bad state %u\n",
> + call->debug_id, call->state);
> + fallthrough;
> + case -ENODATA:
> + case -EBADMSG:
> + case -EMSGSIZE:
> + case -ENOMEM:
> + case -EFAULT:
> + rxrpc_kernel_abort_call(rxperf_socket, call->rxcall,
> + RXGEN_SS_UNMARSHAL, ret, "GUM");
> + goto call_complete;
> + default:
> + rxrpc_kernel_abort_call(rxperf_socket, call->rxcall,
> + RX_CALL_DEAD, ret, "GER");
> + goto call_complete;
> + }
> + }
> +
> +call_complete:
> + rxperf_set_call_complete(call, ret, remote_abort);
> + /* The call may have been requeued */
> + rxrpc_kernel_end_call(rxperf_socket, call->rxcall);
> + cancel_work(&call->work);
> + kfree(call);
> +}
> +
> +/*
> + * Extract a piece of data from the received data socket buffers.
> + */
> +static int rxperf_extract_data(struct rxperf_call *call, bool want_more)
> +{
> + u32 remote_abort = 0;
> + int ret;
> +
> + ret = rxrpc_kernel_recv_data(rxperf_socket, call->rxcall, &call->iter,
> + &call->iov_len, want_more, &remote_abort,
> + &call->service_id);
> + pr_debug("Extract i=%zu l=%zu m=%u ret=%d\n",
> + iov_iter_count(&call->iter), call->iov_len, want_more, ret);
> + if (ret == 0 || ret == -EAGAIN)
> + return ret;
> +
> + if (ret == 1) {
> + switch (call->state) {
> + case RXPERF_CALL_SV_AWAIT_REQUEST:
> + rxperf_set_call_state(call, RXPERF_CALL_SV_REPLYING);
> + break;
> + case RXPERF_CALL_COMPLETE:
> + pr_debug("premature completion %d", call->error);
> + return call->error;
> + default:
> + break;
> + }
> + return 0;
> + }
> +
> + rxperf_set_call_complete(call, ret, remote_abort);
> + return ret;
> +}
> +
> +/*
> + * Grab the operation ID from an incoming manager call.
> + */
> +static int rxperf_deliver_param_block(struct rxperf_call *call)
> +{
> + u32 version;
> + int ret;
> +
> + /* Extract the parameter block */
> + ret = rxperf_extract_data(call, true);
> + if (ret < 0)
> + return ret;
> +
> + version = ntohl(call->params.version);
> + call->operation_id = ntohl(call->params.type);
> + call->deliver = rxperf_deliver_request;
> +
> + if (version != RX_PERF_VERSION) {
> + pr_info("Version mismatch %x\n", version);
> + return -ENOTSUPP;
> + }
> +
> + switch (call->operation_id) {
> + case RX_PERF_SEND:
> + call->type = "send";
> + call->reply_len = 0;
> + call->iov_len = 4; /* Expect req size */
> + break;
> + case RX_PERF_RECV:
> + call->type = "recv";
> + call->req_len = 0;
> + call->iov_len = 4; /* Expect reply size */
> + break;
> + case RX_PERF_RPC:
> + call->type = "rpc";
> + call->iov_len = 8; /* Expect req size and reply size */
> + break;
> + case RX_PERF_FILE:
> + call->type = "file";
> + fallthrough;
> + default:
> + return -EOPNOTSUPP;
> + }
> +
> + rxperf_set_call_state(call, RXPERF_CALL_SV_AWAIT_REQUEST);
> + return call->deliver(call);
> +}
> +
> +/*
> + * Deliver the request data.
> + */
> +static int rxperf_deliver_request(struct rxperf_call *call)
> +{
> + int ret;
> +
> + switch (call->unmarshal) {
> + case 0:
> + call->kvec[0].iov_len = call->iov_len;
> + call->kvec[0].iov_base = call->tmp;
> + iov_iter_kvec(&call->iter, READ, call->kvec, 1, call->iov_len);
> + call->unmarshal++;
> + fallthrough;
> + case 1:
> + ret = rxperf_extract_data(call, true);
> + if (ret < 0)
> + return ret;
> +
> + switch (call->operation_id) {
> + case RX_PERF_SEND:
> + call->type = "send";
> + call->req_len = ntohl(call->tmp[0]);
> + call->reply_len = 0;
> + break;
> + case RX_PERF_RECV:
> + call->type = "recv";
> + call->req_len = 0;
> + call->reply_len = ntohl(call->tmp[0]);
> + break;
> + case RX_PERF_RPC:
> + call->type = "rpc";
> + call->req_len = ntohl(call->tmp[0]);
> + call->reply_len = ntohl(call->tmp[1]);
> + break;
> + default:
> + pr_info("Can't parse extra params\n");
> + return -EIO;
> + }
> +
> + pr_debug("CALL op=%s rq=%zx rp=%zx\n",
> + call->type, call->req_len, call->reply_len);
> +
> + call->iov_len = call->req_len;
> + iov_iter_discard(&call->iter, READ, call->req_len);
> + call->unmarshal++;
> + fallthrough;
> + case 2:
> + ret = rxperf_extract_data(call, false);
> + if (ret < 0)
> + return ret;
> + call->unmarshal++;
> + fallthrough;
> + default:
> + return 0;
> + }
> +}
> +
> +/*
> + * Process a call for which we've received the request.
> + */
> +static int rxperf_process_call(struct rxperf_call *call)
> +{
> + struct msghdr msg = {};
> + struct bio_vec bv[1];
> + struct kvec iov[1];
> + ssize_t n;
> + size_t reply_len = call->reply_len, len;
> +
> + rxrpc_kernel_set_tx_length(rxperf_socket, call->rxcall,
> + reply_len + sizeof(rxperf_magic_cookie));
> +
> + while (reply_len > 0) {
> + len = min(reply_len, PAGE_SIZE);
> + bv[0].bv_page = ZERO_PAGE(0);
> + bv[0].bv_offset = 0;
> + bv[0].bv_len = len;
> + iov_iter_bvec(&msg.msg_iter, WRITE, bv, 1, len);
> + msg.msg_flags = MSG_MORE;
> + n = rxrpc_kernel_send_data(rxperf_socket, call->rxcall, &msg,
> + len, rxperf_notify_end_reply_tx);
> + if (n < 0)
> + return n;
> + if (n == 0)
> + return -EIO;
> + reply_len -= n;
> + }
> +
> + len = sizeof(rxperf_magic_cookie);
> + iov[0].iov_base = (void *)rxperf_magic_cookie;
> + iov[0].iov_len = len;
> + iov_iter_kvec(&msg.msg_iter, WRITE, iov, 1, len);
> + msg.msg_flags = 0;
> + n = rxrpc_kernel_send_data(rxperf_socket, call->rxcall, &msg, len,
> + rxperf_notify_end_reply_tx);
> + if (n >= 0)
> + return 0; /* Success */
> +
> + if (n == -ENOMEM)
> + rxrpc_kernel_abort_call(rxperf_socket, call->rxcall,
> + RXGEN_SS_MARSHAL, -ENOMEM, "GOM");
> + return n;
> +}
> +
> +/*
> + * Add a key to the security keyring.
> + */
> +static int rxperf_add_key(struct key *keyring)
> +{
> + key_ref_t kref;
> + int ret;
> +
> + kref = key_create_or_update(make_key_ref(keyring, true),
> + "rxrpc_s",
> + __stringify(RX_PERF_SERVICE) ":2",
> + secret,
> + sizeof(secret),
> + KEY_POS_VIEW | KEY_POS_READ | KEY_POS_SEARCH
> + | KEY_USR_VIEW,
> + KEY_ALLOC_NOT_IN_QUOTA);
> +
> + if (IS_ERR(kref)) {
> + pr_err("Can't allocate rxperf server key: %ld\n", PTR_ERR(kref));
> + return PTR_ERR(kref);
> + }
> +
> + ret = key_link(keyring, key_ref_to_ptr(kref));
> + if (ret < 0)
> + pr_err("Can't link rxperf server key: %d\n", ret);
> + key_ref_put(kref);
> + return ret;
> +}
> +
> +/*
> + * Initialise the rxperf server.
> + */
> +static int __init rxperf_init(void)
> +{
> + struct key *keyring;
> + int ret = -ENOMEM;
> +
> + pr_info("Server registering\n");
> +
> + rxperf_workqueue = alloc_workqueue("rxperf", 0, 0);
> + if (!rxperf_workqueue)
> + goto error_workqueue;
> +
> + keyring = keyring_alloc("rxperf_server",
> + GLOBAL_ROOT_UID, GLOBAL_ROOT_GID, current_cred(),
> + KEY_POS_VIEW | KEY_POS_READ | KEY_POS_SEARCH |
> + KEY_POS_WRITE |
> + KEY_USR_VIEW | KEY_USR_READ | KEY_USR_SEARCH |
> + KEY_USR_WRITE |
> + KEY_OTH_VIEW | KEY_OTH_READ | KEY_OTH_SEARCH,
> + KEY_ALLOC_NOT_IN_QUOTA,
> + NULL, NULL);
> + if (IS_ERR(keyring)) {
> + pr_err("Can't allocate rxperf server keyring: %ld\n",
> + PTR_ERR(keyring));
> + goto error_keyring;
> + }
> + rxperf_sec_keyring = keyring;
> + ret = rxperf_add_key(keyring);
> + if (ret < 0)
> + goto error_key;
> +
> + ret = rxperf_open_socket();
> + if (ret < 0)
> + goto error_socket;
> + return 0;
> +
> +error_socket:
> +error_key:
> + key_put(rxperf_sec_keyring);
> +error_keyring:
> + destroy_workqueue(rxperf_workqueue);
> + rcu_barrier();
> +error_workqueue:
> + pr_err("Failed to register: %d\n", ret);
> + return ret;
> +}
> +late_initcall(rxperf_init); /* Must be called after net/ to create socket */
> +
> +static void __exit rxperf_exit(void)
> +{
> + pr_info("Server unregistering.\n");
> +
> + rxperf_close_socket();
> + key_put(rxperf_sec_keyring);
> + destroy_workqueue(rxperf_workqueue);
> + rcu_barrier();
> +}
> +module_exit(rxperf_exit);
> diff --git a/net/rxrpc/server_key.c b/net/rxrpc/server_key.c
> index ee269e0e6ee8..e51940589ee5 100644
> --- a/net/rxrpc/server_key.c
> +++ b/net/rxrpc/server_key.c
> @@ -144,3 +144,28 @@ int rxrpc_server_keyring(struct rxrpc_sock *rx, sockptr_t optval, int optlen)
> _leave(" = 0 [key %x]", key->serial);
> return 0;
> }
> +
> +/**
> + * rxrpc_sock_set_security_keyring - Set the security keyring for a kernel service
> + * @sk: The socket to set the keyring on
> + * @keyring: The keyring to set
> + *
> + * Set the server security keyring on an rxrpc socket. This is used to provide
> + * the encryption keys for a kernel service.
> + */
> +int rxrpc_sock_set_security_keyring(struct sock *sk, struct key *keyring)
> +{
> + struct rxrpc_sock *rx = rxrpc_sk(sk);
> + int ret = 0;
> +
> + lock_sock(sk);
> + if (rx->securities)
> + ret = -EINVAL;
> + else if (rx->sk.sk_state != RXRPC_UNBOUND)
> + ret = -EISCONN;
> + else
> + rx->securities = key_get(keyring);
> + release_sock(sk);
> + return ret;
> +}
> +EXPORT_SYMBOL(rxrpc_sock_set_security_keyring);

This will need some MODULE_* definitions when compiled as a module.

Could the port be made configurable, perhaps as a module parameter,
with 7009 as the default?
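
Just to sketch what that might look like (illustrative only, not part of
the posted patch; the parameter name and strings here are made up), the
module could gain something along these lines in rxperf.c, where
RXPERF_PORT is already defined:

    /* Hypothetical sketch: expose the port as a module parameter
     * (defaulting to 7009) and add the module metadata a tristate
     * build would want.  Assumes <linux/module.h> and
     * <linux/moduleparam.h> are pulled in.
     */
    static unsigned int rxperf_port = RXPERF_PORT;
    module_param_named(port, rxperf_port, uint, 0444);
    MODULE_PARM_DESC(port, "UDP port to listen on for rxperf calls");

    MODULE_DESCRIPTION("rxperf test server for AF_RXRPC");
    MODULE_AUTHOR("Red Hat, Inc.");
    MODULE_LICENSE("GPL");

rxperf_open_socket() would then bind with htons(rxperf_port) rather than
the fixed htons(RXPERF_PORT).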

Marc

2022-11-28 19:51:13

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [PATCH net-next 00/13] rxrpc: Increasing SACK size and moving away from softirq, part 2

On Thu, 24 Nov 2022 07:08:08 +0000 David Howells wrote:
> What's the best way to base on a fix commit that's in net for patches in
> net-next? Here I tried basing on a merge between them. Should I include the
> fix patch on my net-next branch instead? Or will net be merged into net-next
> at some point and I should wait for that?

We merge net -> net-next each Thursday afternoon (LT / Linus Time)
so if the wait is for something in net then we generally ask for folks
to just hold off posting until the merge. If the dependency is the
other way then just post based on what's in tree and provide the
conflict resolution in the cover letter.

2022-11-28 20:56:11

by David Howells

[permalink] [raw]
Subject: Re: [PATCH net-next 00/13] rxrpc: Increasing SACK size and moving away from softirq, part 2

Jakub Kicinski <[email protected]> wrote:

> On Thu, 24 Nov 2022 07:08:08 +0000 David Howells wrote:
> > What's the best way to base on a fix commit that's in net for patches in
> > net-next? Here I tried basing on a merge between them. Should I include
> > the fix patch on my net-next branch instead? Or will net be merged into
> > net-next at some point and I should wait for that?
>
> We merge net -> net-next each Thursday afternoon (LT / Linus Time)
> so if the wait is for something in net then we generally ask for folks
> to just hold off posting until the merge. If the dependency is the
> other way then just post based on what's in tree and provide the
> conflict resolution in the cover letter.

Ok. I guess last Thursday was skipped because of Thanksgiving.

David

2022-11-28 21:05:41

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [PATCH net-next 00/13] rxrpc: Increasing SACK size and moving away from softirq, part 2

On Mon, 28 Nov 2022 20:16:34 +0000 David Howells wrote:
> > We merge net -> net-next each Thursday afternoon (LT / Linus Time)
> > so if the wait is for something in net then we generally ask for folks
> > to just hold off posting until the merge. If the dependency is the
> > other way then just post based on what's in tree and provide the
> > conflict resolution in the cover letter.
>
> Ok. I guess last Thursday was skipped because of Thanksgiving.

Really? Ugh. I wasn't clear enough in communicating to other
maintainers that I'd be out, I guess.

I'll try to send another PR later today, once I've caught up with all
the traffic, and merge things up properly. Stay tuned.