From: Pankaj Gupta <pagupta@redhat.com>
To: linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org,
        virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
        linux-fsdevel@vger.kernel.org, linux-acpi@vger.kernel.org,
        qemu-devel@nongnu.org, linux-ext4@vger.kernel.org,
        linux-xfs@vger.kernel.org
Cc: dan.j.williams@intel.com, zwisler@kernel.org, vishal.l.verma@intel.com,
        dave.jiang@intel.com, mst@redhat.com, jasowang@redhat.com,
        willy@infradead.org,
        rjw@rjwysocki.net, hch@infradead.org, lenb@kernel.org, jack@suse.cz,
        tytso@mit.edu, adilger.kernel@dilger.ca, darrick.wong@oracle.com,
        lcapitulino@redhat.com, kwolf@redhat.com, imammedo@redhat.com,
        jmoyer@redhat.com, nilal@redhat.com, riel@surriel.com,
        stefanha@redhat.com, aarcange@redhat.com, david@redhat.com,
        david@fromorbit.com, cohuck@redhat.com, xiaoguangrong.eric@gmail.com,
        pagupta@redhat.com
Subject: [PATCH v4 0/5] virtio pmem driver
Date: Wed, 3 Apr 2019 16:10:13 +0530
Message-Id: <20190403104018.23947-1-pagupta@redhat.com>

This patch series implements "virtio pmem": fake persistent memory
(nvdimm) in the guest, which allows the guest to bypass its page
cache. It also implements a VIRTIO-based asynchronous flush mechanism.

This posting shares the guest kernel driver with all of the changes
suggested in v3 incorporated, tested against the QEMU-side device
emulation [6] for virtio-pmem. The impact of possible page cache
side-channel attacks is now documented, together with the suggested
countermeasures. Details of the project idea for the 'virtio pmem'
flushing interface were shared earlier [3] & [4].

The implementation is divided into two parts: the new virtio pmem
guest driver, and the QEMU code changes for the new virtio pmem
paravirtualized device.

1. Guest virtio-pmem kernel driver
---------------------------------
- Reads the persistent memory range from the paravirt device and
  registers it with 'nvdimm_bus'.
- The 'nvdimm/pmem' driver uses this information to allocate the
  persistent memory region and set up filesystem operations on the
  allocated memory.
- The virtio pmem driver implements an asynchronous flushing
  interface to flush from guest to host.

2. QEMU virtio-pmem device
---------------------------------
- Creates the virtio pmem device and exposes a memory range to the
  KVM guest.
- On the host side this is file-backed memory which acts as
  persistent memory.
- The QEMU-side flush uses the aio thread pool APIs and virtio for
  asynchronous handling of multiple guest requests.

David Hildenbrand (CCed) also posted a modified version [7] of the
QEMU virtio-pmem code, based on the updated QEMU memory device API.

Virtio-pmem security implications and countermeasures:
-----------------------------------------------------
In a previous posting of the kernel driver there was discussion [9]
of the possible implications of page cache side-channel attacks with
virtio pmem. After a thorough analysis of the known side-channel
attacks, the conclusions are as follows (a minimal mapping sketch
follows this list):

- Exposure depends entirely on how the host backing image file is
  mapped into the guest address space.
- The virtio-pmem device emulation uses a shared mapping by default
  to map the host backing file. It is recommended to use a separate
  backing file on the host side for every guest; this prevents any
  possibility of executing common code from multiple guests, and any
  chance of inferring guest-local data based on execution time.
- If the backing file must be shared among multiple guests, it is
  recommended not to support host page cache eviction commands from
  the guest driver. This avoids any possibility of inferring
  guest-local data, or host data from another guest.
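To make the shared-mapping point concrete, here is a minimal
host-side sketch in plain C. It is not the actual QEMU device
emulation code, and map_backing_file() is a made-up helper name used
only to illustrate the mapping choice:

/*
 * Illustrative only: how a host could map a per-guest backing file.
 */
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

static void *map_backing_file(const char *path, size_t size)
{
	void *addr;
	int fd = open(path, O_RDWR);

	if (fd < 0)
		return NULL;

	/*
	 * MAP_SHARED: guest writes reach the host page cache and, after
	 * a host-side fsync(), the backing file. Keeping one backing
	 * file per guest avoids the cross-guest page cache sharing that
	 * enables the timing side channels discussed above.
	 */
	addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
		    MAP_SHARED, fd, 0);
	close(fd);

	return addr == MAP_FAILED ? NULL : addr;
}

With one such file per guest, no page cache pages are shared across
guests, so cache-eviction timing reveals nothing about a neighbour.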
The proposed device specification [8] for the virtio-pmem device
details these possible security implications and the suggested
countermeasures for device emulation.

Virtio-pmem error handling:
----------------------------------------
The behaviour of virtio-pmem has been checked for the error types
below; suggestions on the expected behaviour for handling these
errors are welcome.

- Hardware errors: uncorrectable recoverable errors

 a] virtio-pmem:
  - As per the current logic, if the error page belongs to the QEMU
    process, the host MCE handler isolates (hwpoisons) that page and
    sends SIGBUS. The QEMU SIGBUS handler injects the exception into
    the KVM guest.
  - The KVM guest then isolates the page and sends SIGBUS to the
    guest userspace process that has the page mapped.

 b] Existing implementation in the ACPI pmem driver:
  - Handles such errors with an MCE notifier and creates a list of
    bad blocks. Read/direct-access DAX operations return -EIO if the
    accessed memory page falls in the bad block list.
  - It also starts background scrubbing.
  - Similar functionality can be reused in virtio-pmem with an MCE
    notifier, but without scrubbing (no ACPI/ARS)? Inputs are needed
    to confirm whether this behaviour is OK or needs any change.

Changes from PATCH v3: [1]
- Use the generic dax_synchronous() helper to check for the
  DAXDEV_SYNC flag - [Dan, Darrick, Jan]
- Add an 'is_nvdimm_async' function
- Document the page cache side-channel attack implications &
  countermeasures - [Dave Chinner, Michael]

Changes from PATCH v2: [2]
- Disable MAP_SYNC for the ext4 & XFS filesystems - [Dan]
- Use the name 'virtio pmem' in place of 'fake dax'

Changes from PATCH v1:
- 0-day build test for the build dependency on libnvdimm

Changes suggested by - [Dan Williams]
- Split the driver into two parts, virtio & pmem
- Move queuing of the async block request to the block layer
- Add a "sync" parameter to the nvdimm_flush function (see the
  callback sketch after this changelog)
- Use an indirect call for nvdimm_flush
- Don't move declarations to a common global header, e.g. nd.h
- nvdimm_flush() returns 0, or -EIO if it fails
- Teach nsio_rw_bytes() that the flush can fail
- Rename nvdimm_flush() to generic_nvdimm_flush()
- Use 'nd_region->provider_data' for long dereferencing
- Remove the virtio_pmem_freeze/restore functions
- Replace the BSD license text with an SPDX license tag
- Add might_sleep() in virtio_pmem_flush - [Luiz]
- Make spin_lock_irqsave() narrow

Changes from RFC v3:
- Rebase to latest upstream - Luiz
- Call ndregion->flush in place of nvdimm_flush - Luiz
- kmalloc return check - Luiz
- virtqueue full handling - Stefan
- Don't map the entire virtio_pmem_req to the device - Stefan
- Fix a request leak; correct the sizeof of the request - Stefan
- Move the declaration to virtio_pmem.c

Changes from RFC v2:
- Add a flush function in the nd_region in place of switching on a
  flag - Dan & Stefan
- Add a flush completion function with proper locking, and wait for
  host-side flush completion - Stefan & Dan
- Keep the userspace API in a uapi header file - Stefan, MST
- Use LE fields & a new device id - MST
- Indentation & spacing suggestions - MST & Eric
- Remove extra header files & add licensing - Stefan

Changes from RFC v1:
- Reuse the existing 'pmem' code for registering persistent memory
  and other operations instead of creating an entirely new block
  driver.
- Use the VIRTIO driver to register memory information with
  nvdimm_bus, and create the region_type accordingly.
- Call the VIRTIO flush from the existing pmem driver.
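As a reading aid for the changelog items above, here is a
self-contained sketch of the resulting flush-callback shape. All of
the types and function bodies are simplified stand-ins for
illustration, not the real libnvdimm definitions from the patches:

/*
 * Stand-ins only: models the indirect flush call per nd_region.
 */
#include <stdio.h>

struct nd_region {
	/* Regions override the default flush via an indirect call. */
	int (*flush)(struct nd_region *nd_region);
	void *provider_data;	/* e.g. the virtio_pmem device */
};

/* Default path for real NVDIMMs (the renamed nvdimm_flush()). */
static int generic_nvdimm_flush(struct nd_region *nd_region)
{
	(void)nd_region;
	/* CPU cache and WPQ flush would happen here. */
	return 0;
}

/*
 * virtio-pmem override: forwards the flush to the host and waits for
 * completion; it may sleep, hence the might_sleep() in the changelog.
 */
static int virtio_pmem_flush(struct nd_region *nd_region)
{
	(void)nd_region;
	/* ...send VIRTIO flush request, wait for the host ack... */
	return 0;	/* or -EIO when the host reports failure */
}

int main(void)
{
	struct nd_region pmem = { .flush = generic_nvdimm_flush };
	struct nd_region vpmem = { .flush = virtio_pmem_flush };

	printf("pmem flush: %d, virtio-pmem flush: %d\n",
	       pmem.flush(&pmem), vpmem.flush(&vpmem));
	return 0;
}

The key points track the changelog: the flush is an indirect call per
nd_region, it returns 0 or -EIO, and the virtio-pmem variant may
sleep while waiting for the host.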
Pankaj Gupta (5):
  libnvdimm: nd_region flush callback support
  virtio-pmem: Add virtio-pmem guest driver
  libnvdimm: add nd_region buffered dax_dev flag
  ext4: disable map_sync for virtio pmem
  xfs: disable map_sync for virtio pmem

(A simplified sketch of the MAP_SYNC gating used by the last two
patches follows the references below.)

[1] https://lkml.org/lkml/2019/1/9/471
[2] https://lkml.org/lkml/2018/10/13/117
[3] https://www.spinics.net/lists/kvm/msg149761.html
[4] https://www.spinics.net/lists/kvm/msg153095.html
[5] https://lkml.org/lkml/2018/8/31/413
[6] https://marc.info/?l=linux-kernel&m=153572228719237&w=2
[7] https://marc.info/?l=qemu-devel&m=153555721901824&w=2
[8] https://lists.oasis-open.org/archives/virtio-dev/201903/msg00083.html
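Finally, a hypothetical sketch of the MAP_SYNC gating that patches 4
and 5 describe. The types and the dax_synchronous_stub() helper are
illustrative stand-ins, not the actual ext4/XFS code, which hooks the
filesystem's mmap path and uses the dax_synchronous() helper from the
v3 changelog:

/*
 * Stand-ins only: why MAP_SYNC must be refused on virtio-pmem.
 */
#include <errno.h>
#include <stdbool.h>

struct dax_device;
struct vm_area_struct { unsigned long vm_flags; };
#define VM_SYNC 0x00800000UL	/* illustrative flag value */

/*
 * Models dax_synchronous(): true only when the device guarantees
 * persistence of writes without an explicit flush.
 */
static bool dax_synchronous_stub(struct dax_device *dax_dev)
{
	(void)dax_dev;
	return false;	/* virtio-pmem: flushes are asynchronous */
}

/* Shape of the mmap-time check: MAP_SYNC only makes sense when the
 * backing device is truly synchronous. */
static int fs_dax_mmap_check(struct vm_area_struct *vma,
			     struct dax_device *dax_dev)
{
	if ((vma->vm_flags & VM_SYNC) && !dax_synchronous_stub(dax_dev))
		return -EOPNOTSUPP;	/* refuse MAP_SYNC on virtio-pmem */
	return 0;
}

MAP_SYNC promises persistence without an explicit fsync(); since
virtio-pmem needs a host-side flush for persistence, such mappings
have to be refused.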