Date: Fri, 14 Sep 2018 08:13:05 -0400 (EDT)
From: Pankaj Gupta
To: Luiz Capitulino
Cc: kwolf@redhat.com, jack@suse.cz, xiaoguangrong eric, kvm@vger.kernel.org,
    riel@surriel.com, linux-nvdimm@ml01.01.org, david@redhat.com, ross zwisler,
    linux-kernel@vger.kernel.org, qemu-devel@nongnu.org, hch@infradead.org,
    imammedo@redhat.com, mst@redhat.com, stefanha@redhat.com,
    niteshnarayanlal@hotmail.com, pbonzini@redhat.com, dan j williams, nilal@redhat.com
Message-ID: <58609586.13187764.1536927185694.JavaMail.zimbra@redhat.com>
In-Reply-To: <20180913081932.14783262@doriath>
References: <20180831133019.27579-1-pagupta@redhat.com>
 <20180831133019.27579-4-pagupta@redhat.com>
 <20180912125423.3ad0e034@doriath>
 <831225077.12817716.1536821901550.JavaMail.zimbra@redhat.com>
 <20180913081932.14783262@doriath>
Subject: Re: [Qemu-devel] [PATCH 3/3] virtio-pmem: Add virtio pmem driver

> 
> > Hi Luiz,
> > 
> > Thanks for the review.
> > 
> > > 
> > > > This patch adds virtio-pmem driver for KVM guest.
> > > > 
> > > > Guest reads the persistent memory range information from
> > > > Qemu over VIRTIO and registers it on nvdimm_bus. It also
> > > > creates a nd_region object with the persistent memory
> > > > range information so that existing 'nvdimm/pmem' driver
> > > > can reserve this into system memory map. This way
> > > > 'virtio-pmem' driver uses existing functionality of pmem
> > > > driver to register persistent memory compatible for DAX
> > > > capable filesystems.
> > > > 
> > > > This also provides function to perform guest flush over
> > > > VIRTIO from 'pmem' driver when userspace performs flush
> > > > on DAX memory range.
> > > > 
> > > > Signed-off-by: Pankaj Gupta
> > > > ---
> > > >  drivers/virtio/Kconfig           |   9 ++
> > > >  drivers/virtio/Makefile          |   1 +
> > > >  drivers/virtio/virtio_pmem.c     | 255 +++++++++++++++++++++++++++++++++++++++
> > > >  include/uapi/linux/virtio_ids.h  |   1 +
> > > >  include/uapi/linux/virtio_pmem.h |  40 ++++++
> > > >  5 files changed, 306 insertions(+)
> > > >  create mode 100644 drivers/virtio/virtio_pmem.c
> > > >  create mode 100644 include/uapi/linux/virtio_pmem.h
> > > > 
> > > > diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
> > > > index 3589764..a331e23 100644
> > > > --- a/drivers/virtio/Kconfig
> > > > +++ b/drivers/virtio/Kconfig
> > > > @@ -42,6 +42,15 @@ config VIRTIO_PCI_LEGACY
> > > > 
> > > >           If unsure, say Y.
> > > > 
> > > > +config VIRTIO_PMEM
> > > > +        tristate "Support for virtio pmem driver"
> > > > +        depends on VIRTIO
> > > > +        help
> > > > +          This driver provides support for virtio based flushing interface
> > > > +          for persistent memory range.
> > > > +
> > > > +          If unsure, say M.
> > > > +
> > > >  config VIRTIO_BALLOON
> > > >          tristate "Virtio balloon driver"
> > > >          depends on VIRTIO
> > > > diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
> > > > index 3a2b5c5..cbe91c6 100644
> > > > --- a/drivers/virtio/Makefile
> > > > +++ b/drivers/virtio/Makefile
> > > > @@ -6,3 +6,4 @@ virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o
> > > >  virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o
> > > >  obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o
> > > >  obj-$(CONFIG_VIRTIO_INPUT) += virtio_input.o
> > > > +obj-$(CONFIG_VIRTIO_PMEM) += virtio_pmem.o
> > > > diff --git a/drivers/virtio/virtio_pmem.c b/drivers/virtio/virtio_pmem.c
> > > > new file mode 100644
> > > > index 0000000..c22cc87
> > > > --- /dev/null
> > > > +++ b/drivers/virtio/virtio_pmem.c
> > > > @@ -0,0 +1,255 @@
> > > > +// SPDX-License-Identifier: GPL-2.0
> > > > +/*
> > > > + * virtio_pmem.c: Virtio pmem Driver
> > > > + *
> > > > + * Discovers persistent memory range information
> > > > + * from host and provides a virtio based flushing
> > > > + * interface.
> > > > + */
> > > > +#include
> > > > +#include
> > > > +#include
> > > > +#include
> > > > +#include
> > > > +#include
> > > > +#include
> > > > +
> > > > +struct virtio_pmem_request {
> > > > +        /* Host return status corresponding to flush request */
> > > > +        int ret;
> > > > +
> > > > +        /* command name*/
> > > > +        char name[16];
> > > > +
> > > > +        /* Wait queue to process deferred work after ack from host */
> > > > +        wait_queue_head_t host_acked;
> > > > +        bool done;
> > > > +
> > > > +        /* Wait queue to process deferred work after virt queue buffer avail */
> > > > +        wait_queue_head_t wq_buf;
> > > > +        bool wq_buf_avail;
> > > > +        struct list_head list;
> > > > +};
> > > > +
> > > > +struct virtio_pmem {
> > > > +        struct virtio_device *vdev;
> > > > +
> > > > +        /* Virtio pmem request queue */
> > > > +        struct virtqueue *req_vq;
> > > > +
> > > > +        /* nvdimm bus registers virtio pmem device */
> > > > +        struct nvdimm_bus *nvdimm_bus;
> > > > +        struct nvdimm_bus_descriptor nd_desc;
> > > > +
> > > > +        /* List to store deferred work if virtqueue is full */
> > > > +        struct list_head req_list;
> > > > +
> > > > +        /* Synchronize virtqueue data */
> > > > +        spinlock_t pmem_lock;
> > > > +
> > > > +        /* Memory region information */
> > > > +        uint64_t start;
> > > > +        uint64_t size;
> > > > +};
> > > > +
> > > > +static struct virtio_device_id id_table[] = {
> > > > +        { VIRTIO_ID_PMEM, VIRTIO_DEV_ANY_ID },
> > > > +        { 0 },
> > > > +};
> > > > +
> > > > + /* The interrupt handler */
> > > > +static void host_ack(struct virtqueue *vq)
> > > > +{
> > > > +        unsigned int len;
> > > > +        unsigned long flags;
> > > > +        struct virtio_pmem_request *req, *req_buf;
> > > > +        struct virtio_pmem *vpmem = vq->vdev->priv;
> > > > +
> > > > +        spin_lock_irqsave(&vpmem->pmem_lock, flags);
> > > > +        while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
> > > > +                req->done = true;
> > > > +                wake_up(&req->host_acked);
> > > > +
> > > > +                if (!list_empty(&vpmem->req_list)) {
> > > > +                        req_buf = list_first_entry(&vpmem->req_list,
> > > > +                                        struct virtio_pmem_request, list);
> > > > +                        list_del(&vpmem->req_list);
> > > > +                        req_buf->wq_buf_avail = true;
> > > > +                        wake_up(&req_buf->wq_buf);
> > > > +                }
> > > > +        }
> > > > +        spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> > > > +}
> > > > + /* Initialize virt queue */
> > > > +static int init_vq(struct virtio_pmem *vpmem)
> > > > +{
> > > > +        struct virtqueue *vq;
> > > > +
> > > > +        /* single vq */
> > > > +        vpmem->req_vq = vq = virtio_find_single_vq(vpmem->vdev,
> > > > +                                        host_ack, "flush_queue");
> > > > +        if (IS_ERR(vq))
> > > > +                return PTR_ERR(vq);
> > > > +
> > > > +        spin_lock_init(&vpmem->pmem_lock);
> > > > +        INIT_LIST_HEAD(&vpmem->req_list);
> > > > +
> > > > +        return 0;
> > > > +};
> > > > +
> > > > + /* The request submission function */
> > > > +static int virtio_pmem_flush(struct nd_region *nd_region)
> > > > +{
> > > > +        int err;
> > > > +        unsigned long flags;
> > > > +        struct scatterlist *sgs[2], sg, ret;
> > > > +        struct virtio_device *vdev =
> > > > +                        dev_to_virtio(nd_region->dev.parent->parent);
> > > > +        struct virtio_pmem *vpmem = vdev->priv;
> > > 
> > > I'm missing a might_sleep() call in this function.
> > 
> > I am not sure if we need might_sleep here?
> > We can add it as debugging aid for detecting any problems
> > in sleeping from acquired atomic context?
> 
> Yes. Since this function sleeps and since some functions that
> may run in atomic context call it, it's a good idea to
> call might_sleep().

OK, will add might_sleep().

> > > > +        struct virtio_pmem_request *req = kmalloc(sizeof(*req), GFP_KERNEL);
> > > > +
> > > > +        if (!req)
> > > > +                return -ENOMEM;
> > > > +
> > > > +        req->done = req->wq_buf_avail = false;
> > > > +        strcpy(req->name, "FLUSH");
> > > > +        init_waitqueue_head(&req->host_acked);
> > > > +        init_waitqueue_head(&req->wq_buf);
> > > > +
> > > > +        spin_lock_irqsave(&vpmem->pmem_lock, flags);
> > > > +        sg_init_one(&sg, req->name, strlen(req->name));
> > > > +        sgs[0] = &sg;
> > > > +        sg_init_one(&ret, &req->ret, sizeof(req->ret));
> > > > +        sgs[1] = &ret;
> > > 
> > > It seems that sg_init_one() is only setting fields, in this
> > > case you can move spin_lock_irqsave() here.
> > 
> > yes, will move spin_lock_irqsave here.
> > 
> > > > +        err = virtqueue_add_sgs(vpmem->req_vq, sgs, 1, 1, req, GFP_ATOMIC);
> > > > +        if (err) {
> > > > +                dev_err(&vdev->dev, "failed to send command to virtio pmem device\n");
> > > > +
> > > > +                list_add_tail(&vpmem->req_list, &req->list);
> > > > +                spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> > > > +
> > > > +                /* When host has read buffer, this completes via host_ack */
> > > > +                wait_event(req->wq_buf, req->wq_buf_avail);
> > > > +                spin_lock_irqsave(&vpmem->pmem_lock, flags);
> > > 
> > > Is this error handling code assuming that at some point
> > > virtqueue_add_sgs() will succeed for a different thread? If yes,
> > > what happens if the assumption is false? That is, what happens if
> > > virtqueue_add_sgs() never succeeds anymore?
> > 
> > virtqueue_add_sgs will not succeed and corresponding thread should wait.
> > All subsequent calling threads should also wait. As soon as there is first
> > available free entry(from host), first waiting thread is acknowledged.
> > 
> > In worst case if Qemu is not utilizing any of the used buffer will keep
> > multiple threads waiting.
> > 
> > > Why not just return an error?
> > 
> > As per suggestion by Stefan in previous discussion: if the virtqueue is full,
> > printing a message and failing the flush isn't appropriate. This thread needs to
> > wait until virtqueue space becomes available.
> 
> If virtqueue_add_sgs() is guaranteed to succeed at some point then OK.
> Otherwise, you'll get threads getting stuck forever.

We are handling the 'virtqueue_add_sgs' failure case here, i.e. when the
virtqueue is full. For the regular virtqueue-full case, guest threads should
wait. This scales to more fsync requests than the current virtqueue size and
avoids returning failures to userspace. Even if we returned an error while the
QEMU side is stuck, we would keep returning errors until the host actually
makes progress and frees an entry in the virtqueue.
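To summarise the two changes agreed above (adding might_sleep() and taking
pmem_lock only after the scatterlists are initialised), the beginning of
virtio_pmem_flush() would then look roughly like this -- untested sketch, same
field and function names as in the patch, the rest of the function unchanged:

 /* The request submission function */
static int virtio_pmem_flush(struct nd_region *nd_region)
{
        int err;
        unsigned long flags;
        struct scatterlist *sgs[2], sg, ret;
        struct virtio_device *vdev =
                        dev_to_virtio(nd_region->dev.parent->parent);
        struct virtio_pmem *vpmem = vdev->priv;
        struct virtio_pmem_request *req;

        /* catch callers that (incorrectly) invoke this from atomic context */
        might_sleep();

        req = kmalloc(sizeof(*req), GFP_KERNEL);
        if (!req)
                return -ENOMEM;

        req->done = req->wq_buf_avail = false;
        strcpy(req->name, "FLUSH");
        init_waitqueue_head(&req->host_acked);
        init_waitqueue_head(&req->wq_buf);

        /* sg_init_one() only fills in the entries, no need to hold the lock yet */
        sg_init_one(&sg, req->name, strlen(req->name));
        sgs[0] = &sg;
        sg_init_one(&ret, &req->ret, sizeof(req->ret));
        sgs[1] = &ret;

        /* take pmem_lock only around the virtqueue operations */
        spin_lock_irqsave(&vpmem->pmem_lock, flags);
        err = virtqueue_add_sgs(vpmem->req_vq, sgs, 1, 1, req, GFP_ATOMIC);
        /* ... remainder (full-queue wait, kick, completion wait) as in the patch ... */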
> > > > +        }
> > > > +        virtqueue_kick(vpmem->req_vq);
> > > > +        spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> > > > +
> > > > +        /* When host has read buffer, this completes via host_ack */
> > > > +        wait_event(req->host_acked, req->done);
> > > > +        err = req->ret;
> > > 
> > > If I'm understanding the QEMU code correctly, you're returning EIO
> > > from QEMU if fsync() fails. I think this is wrong, since we don't know
> > > if EIO in QEMU will be the same EIO in the guest. One way to solve this
> > > would be to return 0 for success and 1 for failure from QEMU, and let the
> > > guest implementation pick its error code (for your implementation it
> > > could be EIO).
> > 
> > Makes sense, will change this.
> > 
> > Thanks,
> > Pankaj
> > 
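The corresponding driver-side change would then just be to pick the error code
locally when the host reports failure -- roughly (untested sketch, assuming the
device returns 0 for success and non-zero for failure):

        /* When host has read buffer, this completes via host_ack */
        wait_event(req->host_acked, req->done);
        /* host only reports success/failure; choose the guest errno here */
        err = req->ret ? -EIO : 0;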
> > > > +        kfree(req);
> > > > +
> > > > +        return err;
> > > > +};
> > > > +EXPORT_SYMBOL_GPL(virtio_pmem_flush);
> > > > +
> > > > +static int virtio_pmem_probe(struct virtio_device *vdev)
> > > > +{
> > > > +        int err = 0;
> > > > +        struct resource res;
> > > > +        struct virtio_pmem *vpmem;
> > > > +        struct nvdimm_bus *nvdimm_bus;
> > > > +        struct nd_region_desc ndr_desc;
> > > > +        int nid = dev_to_node(&vdev->dev);
> > > > +        struct nd_region *nd_region;
> > > > +
> > > > +        if (!vdev->config->get) {
> > > > +                dev_err(&vdev->dev, "%s failure: config disabled\n",
> > > > +                        __func__);
> > > > +                return -EINVAL;
> > > > +        }
> > > > +
> > > > +        vdev->priv = vpmem = devm_kzalloc(&vdev->dev, sizeof(*vpmem),
> > > > +                        GFP_KERNEL);
> > > > +        if (!vpmem) {
> > > > +                err = -ENOMEM;
> > > > +                goto out_err;
> > > > +        }
> > > > +
> > > > +        vpmem->vdev = vdev;
> > > > +        err = init_vq(vpmem);
> > > > +        if (err)
> > > > +                goto out_err;
> > > > +
> > > > +        virtio_cread(vpmem->vdev, struct virtio_pmem_config,
> > > > +                        start, &vpmem->start);
> > > > +        virtio_cread(vpmem->vdev, struct virtio_pmem_config,
> > > > +                        size, &vpmem->size);
> > > > +
> > > > +        res.start = vpmem->start;
> > > > +        res.end = vpmem->start + vpmem->size-1;
> > > > +        vpmem->nd_desc.provider_name = "virtio-pmem";
> > > > +        vpmem->nd_desc.module = THIS_MODULE;
> > > > +
> > > > +        vpmem->nvdimm_bus = nvdimm_bus = nvdimm_bus_register(&vdev->dev,
> > > > +                                                &vpmem->nd_desc);
> > > > +        if (!nvdimm_bus)
> > > > +                goto out_vq;
> > > > +
> > > > +        dev_set_drvdata(&vdev->dev, nvdimm_bus);
> > > > +        memset(&ndr_desc, 0, sizeof(ndr_desc));
> > > > +
> > > > +        ndr_desc.res = &res;
> > > > +        ndr_desc.numa_node = nid;
> > > > +        ndr_desc.flush = virtio_pmem_flush;
> > > > +        set_bit(ND_REGION_PAGEMAP, &ndr_desc.flags);
> > > > +        nd_region = nvdimm_pmem_region_create(nvdimm_bus, &ndr_desc);
> > > > +
> > > > +        if (!nd_region)
> > > > +                goto out_nd;
> > > > +
> > > > +        //virtio_device_ready(vdev);
> > > > +        return 0;
> > > > +out_nd:
> > > > +        err = -ENXIO;
> > > > +        nvdimm_bus_unregister(nvdimm_bus);
> > > > +out_vq:
> > > > +        vdev->config->del_vqs(vdev);
> > > > +out_err:
> > > > +        dev_err(&vdev->dev, "failed to register virtio pmem memory\n");
> > > > +        return err;
> > > > +}
> > > > +
> > > > +static void virtio_pmem_remove(struct virtio_device *vdev)
> > > > +{
> > > > +        struct virtio_pmem *vpmem = vdev->priv;
> > > > +        struct nvdimm_bus *nvdimm_bus = dev_get_drvdata(&vdev->dev);
> > > > +
> > > > +        nvdimm_bus_unregister(nvdimm_bus);
> > > > +        vdev->config->del_vqs(vdev);
> > > > +        kfree(vpmem);
> > > > +}
> > > > +
> > > > +#ifdef CONFIG_PM_SLEEP
> > > > +static int virtio_pmem_freeze(struct virtio_device *vdev)
> > > > +{
> > > > +        /* todo: handle freeze function */
> > > > +        return -EPERM;
> > > > +}
> > > > +
> > > > +static int virtio_pmem_restore(struct virtio_device *vdev)
> > > > +{
> > > > +        /* todo: handle restore function */
> > > > +        return -EPERM;
> > > > +}
> > > > +#endif
> > > > +
> > > > +
> > > > +static struct virtio_driver virtio_pmem_driver = {
> > > > +        .driver.name    = KBUILD_MODNAME,
> > > > +        .driver.owner   = THIS_MODULE,
> > > > +        .id_table       = id_table,
> > > > +        .probe          = virtio_pmem_probe,
> > > > +        .remove         = virtio_pmem_remove,
> > > > +#ifdef CONFIG_PM_SLEEP
> > > > +        .freeze         = virtio_pmem_freeze,
> > > > +        .restore        = virtio_pmem_restore,
> > > > +#endif
> > > > +};
> > > > +
> > > > +module_virtio_driver(virtio_pmem_driver);
> > > > +MODULE_DEVICE_TABLE(virtio, id_table);
> > > > +MODULE_DESCRIPTION("Virtio pmem driver");
> > > > +MODULE_LICENSE("GPL");
> > > > diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> > > > index 6d5c3b2..3463895 100644
> > > > --- a/include/uapi/linux/virtio_ids.h
> > > > +++ b/include/uapi/linux/virtio_ids.h
> > > > @@ -43,5 +43,6 @@
> > > >  #define VIRTIO_ID_INPUT        18 /* virtio input */
> > > >  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
> > > >  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> > > > +#define VIRTIO_ID_PMEM         25 /* virtio pmem */
> > > > 
> > > >  #endif /* _LINUX_VIRTIO_IDS_H */
> > > > diff --git a/include/uapi/linux/virtio_pmem.h b/include/uapi/linux/virtio_pmem.h
> > > > new file mode 100644
> > > > index 0000000..c7c22a5
> > > > --- /dev/null
> > > > +++ b/include/uapi/linux/virtio_pmem.h
> > > > @@ -0,0 +1,40 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > +/*
> > > > + * This header, excluding the #ifdef __KERNEL__ part, is BSD licensed so
> > > > + * anyone can use the definitions to implement compatible drivers/servers:
> > > > + *
> > > > + *
> > > > + * Redistribution and use in source and binary forms, with or without
> > > > + * modification, are permitted provided that the following conditions
> > > > + * are met:
> > > > + * 1. Redistributions of source code must retain the above copyright
> > > > + *    notice, this list of conditions and the following disclaimer.
> > > > + * 2. Redistributions in binary form must reproduce the above copyright
> > > > + *    notice, this list of conditions and the following disclaimer in the
> > > > + *    documentation and/or other materials provided with the distribution.
> > > > + * 3. Neither the name of IBM nor the names of its contributors
> > > > + *    may be used to endorse or promote products derived from this software
> > > > + *    without specific prior written permission.
> > > > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS''
> > > > + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> > > > + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> > > > + * ARE DISCLAIMED.  IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE
> > > > + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
> > > > + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
> > > > + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> > > > + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
> > > > + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> > > > + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> > > > + * SUCH DAMAGE.
> > > > + *
> > > > + * Copyright (C) Red Hat, Inc., 2018-2019
> > > > + * Copyright (C) Pankaj Gupta, 2018
> > > > + */
> > > > +#ifndef _UAPI_LINUX_VIRTIO_PMEM_H
> > > > +#define _UAPI_LINUX_VIRTIO_PMEM_H
> > > > +
> > > > +struct virtio_pmem_config {
> > > > +        __le64 start;
> > > > +        __le64 size;
> > > > +};
> > > > +#endif
> > > > 
> > > 
> > 
> 