From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xen.org
Cc: linux-kernel@vger.kernel.org, sstabellini@kernel.org, jgross@suse.com,
    boris.ostrovsky@oracle.com, Stefano Stabellini <stefano@aporeto.com>
Subject: [PATCH v3 04/13] xen/pvcalls: implement socket command and handle events
Date: Mon, 31 Jul 2017 15:57:26 -0700
Message-Id: <1501541855-7354-4-git-send-email-sstabellini@kernel.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1501541855-7354-1-git-send-email-sstabellini@kernel.org>
References: <1501541855-7354-1-git-send-email-sstabellini@kernel.org>

Send a PVCALLS_SOCKET command to the backend, using the masked
req_prod_pvt as the req_id. This way, req_id is guaranteed to be
between 0 and PVCALLS_NR_REQ_PER_RING - 1. We already have a slot in
the rsp array ready for the response, and there cannot be two
outstanding responses with the same req_id.

Wait for the response by waiting on the inflight_req waitqueue and
checking the req_id field of rsp[req_id]. Use atomic accesses and
barriers to read the field. Note that the barriers are simple smp
barriers (as opposed to virt barriers) because they are for internal
frontend synchronization, not frontend<->backend communication.

Once a response is received, clear the corresponding rsp slot by
setting req_id to PVCALLS_INVALID_ID. Note that PVCALLS_INVALID_ID is
invalid only from the frontend point of view. It is not part of the
PVCalls protocol.

pvcalls_front_event_handler is in charge of copying responses from the
ring to the appropriate rsp slot. It does so by copying the body of the
response first, then copying req_id atomically. After the copies, it
wakes up anybody waiting on the waitqueue. pvcallss_lock protects
accesses to the ring.

Create a new struct sock_mapping, convert its pointer to a uint64_t and
use it as the id for the new socket to pass to the backend. The struct
will be fully initialized later, on connect or bind. In this patch
struct sock_mapping is mostly empty; the remaining fields will be added
by the next patch.

sock->sk->sk_send_head is not used for IP sockets: reuse the field to
store a pointer to the struct sock_mapping corresponding to the socket.
This way, we can easily get the struct sock_mapping from the struct
socket.
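To illustrate the slot lifecycle described above, here is a
self-contained userspace sketch (not part of the patch: the shared
ring, event channel and waitqueue are modeled with a plain array, a
pthread and a condition variable, and the mutex stands in for the
smp_wmb()/READ_ONCE() pairing; all names are illustrative only):

    /* build with: gcc -pthread slot_model.c */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NR_SLOTS   8            /* stands in for PVCALLS_NR_REQ_PER_RING */
    #define INVALID_ID UINT32_MAX   /* stands in for PVCALLS_INVALID_ID */

    struct rsp { uint32_t req_id; int ret; };

    static struct rsp rsp[NR_SLOTS];
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   /* ~ pvcallss_lock */
    static pthread_cond_t inflight = PTHREAD_COND_INITIALIZER; /* ~ inflight_req */
    static uint32_t req_prod_pvt;

    /* Mirrors get_request(): the masked producer index is the req_id. */
    static int get_request(uint32_t *req_id)
    {
        *req_id = req_prod_pvt & (NR_SLOTS - 1);
        if (rsp[*req_id].req_id != INVALID_ID)
            return -1;  /* slot still owned by an older request */
        return 0;
    }

    /* Mirrors the event handler: fill the body first, then publish req_id. */
    static void *backend(void *arg)
    {
        uint32_t id = *(uint32_t *)arg;

        pthread_mutex_lock(&lock);
        rsp[id].ret = 0;      /* copy the response body */
        rsp[id].req_id = id;  /* publish: lets the waiter's check succeed */
        pthread_cond_broadcast(&inflight);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        uint32_t req_id;
        int i, ret;

        for (i = 0; i < NR_SLOTS; i++)
            rsp[i].req_id = INVALID_ID;

        pthread_mutex_lock(&lock);
        if (get_request(&req_id))
            return 1;
        req_prod_pvt++;  /* "push" the request */
        pthread_mutex_unlock(&lock);

        pthread_create(&t, NULL, backend, &req_id);

        /* Mirrors wait_event(): sleep until rsp[req_id].req_id == req_id. */
        pthread_mutex_lock(&lock);
        while (rsp[req_id].req_id != req_id)
            pthread_cond_wait(&inflight, &lock);
        ret = rsp[req_id].ret;
        rsp[req_id].req_id = INVALID_ID;  /* free the slot for reuse */
        pthread_mutex_unlock(&lock);

        printf("req %u completed, ret=%d\n", req_id, ret);
        return pthread_join(t, NULL);
    }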
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-front.c | 121 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/xen/pvcalls-front.h |   8 +++
 2 files changed, 129 insertions(+)
 create mode 100644 drivers/xen/pvcalls-front.h

diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
index 2afe36d..7c4a7cb 100644
--- a/drivers/xen/pvcalls-front.c
+++ b/drivers/xen/pvcalls-front.c
@@ -20,6 +20,8 @@
 #include <xen/xenbus.h>
 #include <xen/interface/io/pvcalls.h>
 
+#include "pvcalls-front.h"
+
 #define PVCALLS_INVALID_ID UINT_MAX
 #define PVCALLS_RING_ORDER XENBUS_MAX_RING_GRANT_ORDER
 #define PVCALLS_NR_REQ_PER_RING __CONST_RING_SIZE(xen_pvcalls, XEN_PAGE_SIZE)
@@ -38,11 +40,130 @@ struct pvcalls_bedata {
 };
 static struct xenbus_device *pvcalls_front_dev;
 
+struct sock_mapping {
+	bool active_socket;
+	struct list_head list;
+	struct socket *sock;
+};
+
+static inline int get_request(struct pvcalls_bedata *bedata, int *req_id)
+{
+	*req_id = bedata->ring.req_prod_pvt & (RING_SIZE(&bedata->ring) - 1);
+	if (RING_FULL(&bedata->ring) ||
+	    READ_ONCE(bedata->rsp[*req_id].req_id) != PVCALLS_INVALID_ID)
+		return -EAGAIN;
+	return 0;
+}
+
 static irqreturn_t pvcalls_front_event_handler(int irq, void *dev_id)
 {
+	struct xenbus_device *dev = dev_id;
+	struct pvcalls_bedata *bedata;
+	struct xen_pvcalls_response *rsp;
+	uint8_t *src, *dst;
+	int req_id = 0, more = 0, done = 0;
+
+	if (dev == NULL)
+		return IRQ_HANDLED;
+
+	bedata = dev_get_drvdata(&dev->dev);
+	if (bedata == NULL)
+		return IRQ_HANDLED;
+
+again:
+	while (RING_HAS_UNCONSUMED_RESPONSES(&bedata->ring)) {
+		rsp = RING_GET_RESPONSE(&bedata->ring, bedata->ring.rsp_cons);
+
+		req_id = rsp->req_id;
+		dst = (uint8_t *)&bedata->rsp[req_id] + sizeof(rsp->req_id);
+		src = (uint8_t *)rsp + sizeof(rsp->req_id);
+		memcpy(dst, src, sizeof(*rsp) - sizeof(rsp->req_id));
+		/*
+		 * First copy the rest of the data, then req_id. It is
+		 * paired with the barrier when accessing bedata->rsp.
+		 */
+		smp_wmb();
+		WRITE_ONCE(bedata->rsp[req_id].req_id, rsp->req_id);
+
+		done = 1;
+		bedata->ring.rsp_cons++;
+	}
+
+	RING_FINAL_CHECK_FOR_RESPONSES(&bedata->ring, more);
+	if (more)
+		goto again;
+	if (done)
+		wake_up(&bedata->inflight_req);
 	return IRQ_HANDLED;
 }
 
+int pvcalls_front_socket(struct socket *sock)
+{
+	struct pvcalls_bedata *bedata;
+	struct sock_mapping *map = NULL;
+	struct xen_pvcalls_request *req;
+	int notify, req_id, ret;
+
+	if (!pvcalls_front_dev)
+		return -EACCES;
+	/*
+	 * PVCalls only supports domain AF_INET,
+	 * type SOCK_STREAM and protocol 0 sockets for now.
+	 *
+	 * Check socket type here, AF_INET and protocol checks are done
+	 * by the caller.
+	 */
+	if (sock->type != SOCK_STREAM)
+		return -ENOTSUPP;
+
+	bedata = dev_get_drvdata(&pvcalls_front_dev->dev);
+
+	map = kzalloc(sizeof(*map), GFP_KERNEL);
+	if (map == NULL)
+		return -ENOMEM;
+	/*
+	 * sock->sk->sk_send_head is not used for IP sockets: reuse the
+	 * field to store a pointer to the struct sock_mapping
+	 * corresponding to the socket. This way, we can easily get the
+	 * struct sock_mapping from the struct socket.
+	 */
+	WRITE_ONCE(sock->sk->sk_send_head, (void *)map);
+
+	spin_lock(&bedata->pvcallss_lock);
+	list_add_tail(&map->list, &bedata->socket_mappings);
+
+	ret = get_request(bedata, &req_id);
+	if (ret < 0) {
+		list_del(&map->list);
+		kfree(map);
+		spin_unlock(&bedata->pvcallss_lock);
+		return ret;
+	}
+	req = RING_GET_REQUEST(&bedata->ring, req_id);
+	req->req_id = req_id;
+	req->cmd = PVCALLS_SOCKET;
+	req->u.socket.id = (uintptr_t) map;
+	req->u.socket.domain = AF_INET;
+	req->u.socket.type = SOCK_STREAM;
+	req->u.socket.protocol = 0;
+
+	bedata->ring.req_prod_pvt++;
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&bedata->ring, notify);
+	spin_unlock(&bedata->pvcallss_lock);
+	if (notify)
+		notify_remote_via_irq(bedata->irq);
+
+	wait_event(bedata->inflight_req,
+		   READ_ONCE(bedata->rsp[req_id].req_id) == req_id);
+
+	ret = bedata->rsp[req_id].ret;
+	/* read ret, then set this rsp slot to be reused */
+	smp_mb();
+	WRITE_ONCE(bedata->rsp[req_id].req_id, PVCALLS_INVALID_ID);
+
+	return ret;
+}
+
 static const struct xenbus_device_id pvcalls_front_ids[] = {
 	{ "pvcalls" },
 	{ "" }
diff --git a/drivers/xen/pvcalls-front.h b/drivers/xen/pvcalls-front.h
new file mode 100644
index 0000000..b7dabed
--- /dev/null
+++ b/drivers/xen/pvcalls-front.h
@@ -0,0 +1,8 @@
+#ifndef __PVCALLS_FRONT_H__
+#define __PVCALLS_FRONT_H__
+
+#include <linux/net.h>
+
+int pvcalls_front_socket(struct socket *sock);
+
+#endif
-- 
1.9.1