Date: Tue, 14 May 2019 05:47:22 -0400 (EDT)
From: Pankaj Gupta
To: David Hildenbrand
Cc: linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org,
    kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-acpi@vger.kernel.org,
    qemu-devel@nongnu.org, linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
    dan j williams, zwisler@kernel.org, vishal l verma, dave jiang,
    mst@redhat.com, jasowang@redhat.com, willy@infradead.org, rjw@rjwysocki.net,
    hch@infradead.org, lenb@kernel.org, jack@suse.cz, tytso@mit.edu,
    adilger kernel, darrick wong, lcapitulino@redhat.com, kwolf@redhat.com,
    imammedo@redhat.com, jmoyer@redhat.com, nilal@redhat.com, riel@surriel.com,
    stefanha@redhat.com, aarcange@redhat.com, david@fromorbit.com,
    cohuck@redhat.com, xiaoguangrong eric, pbonzini@redhat.com,
    kilobyte@angband.pl, yuval shaia, jstaron@google.com
Message-ID: <712871093.28555872.1557827242385.JavaMail.zimbra@redhat.com>
In-Reply-To: <86298c2c-cc7c-5b97-0f11-335d7da8c450@redhat.com>
References: <20190510155202.14737-1-pagupta@redhat.com>
 <20190510155202.14737-3-pagupta@redhat.com>
 <752392764.28554139.1557826022323.JavaMail.zimbra@redhat.com>
 <86298c2c-cc7c-5b97-0f11-335d7da8c450@redhat.com>
Subject: Re: [PATCH v8 2/6] virtio-pmem: Add virtio pmem driver

> 
> >>
> >>> +        }
> >>> +
> >>> +        /* When host has read buffer, this completes via host_ack */
> >>
> >> "A host response results in "host_ack" getting called" ... ?
> >>
> >>> +        wait_event(req->host_acked, req->done);
> >>> +        err = req->ret;
> >>> +ret:
> >>> +        kfree(req);
> >>> +        return err;
> >>> +};
> >>> +
> >>> +/* The asynchronous flush callback function */
> >>> +int async_pmem_flush(struct nd_region *nd_region, struct bio *bio)
> >>> +{
> >>> +        int rc = 0;
> >>> +
> >>> +        /* Create child bio for asynchronous flush and chain with
> >>> +         * parent bio. Otherwise directly call nd_region flush.
> >>> +         */
> >>> +        if (bio && bio->bi_iter.bi_sector != -1) {
> >>> +                struct bio *child = bio_alloc(GFP_ATOMIC, 0);
> >>> +
> >>> +                if (!child)
> >>> +                        return -ENOMEM;
> >>> +                bio_copy_dev(child, bio);
> >>> +                child->bi_opf = REQ_PREFLUSH;
> >>> +                child->bi_iter.bi_sector = -1;
> >>> +                bio_chain(child, bio);
> >>> +                submit_bio(child);
> >>
> >> return 0;
> >>
> >> Then, drop the "else" case and "int rc" and do directly
> >>
> >> if (virtio_pmem_flush(nd_region))
> >>         return -EIO;
> > 
> > and another 'return 0' here :)
> > 
> > I don't like returning from multiple places; I prefer a
> > single exit point from a function.
> 
> Makes this function more complicated than necessary. I agree when there
> are locks involved.

o.k. I will change as you suggest :)

> 
> > 
> >>
> >>> +
> >>> +        return 0;
> >>> +};
> >>> +
> >>> +static int virtio_pmem_probe(struct virtio_device *vdev)
> >>> +{
> >>> +        int err = 0;
> >>> +        struct resource res;
> >>> +        struct virtio_pmem *vpmem;
> >>> +        struct nd_region_desc ndr_desc = {};
> >>> +        int nid = dev_to_node(&vdev->dev);
> >>> +        struct nd_region *nd_region;
> >>
> >> Nit: use reverse Christmas tree layout :)
> > 
> > Done.
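For reference, "reverse Christmas tree" ordering just means sorting the local
declarations from the longest line down to the shortest. A rough sketch for
the declarations quoted above (same variables, only reordered) could look
like:

        struct nd_region_desc ndr_desc = {};
        int nid = dev_to_node(&vdev->dev);
        struct nd_region *nd_region;
        struct virtio_pmem *vpmem;
        struct resource res;
        int err = 0;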
> > 
> >>
> >>> +
> >>> +        if (!vdev->config->get) {
> >>> +                dev_err(&vdev->dev, "%s failure: config access disabled\n",
> >>> +                        __func__);
> >>> +                return -EINVAL;
> >>> +        }
> >>> +
> >>> +        vpmem = devm_kzalloc(&vdev->dev, sizeof(*vpmem), GFP_KERNEL);
> >>> +        if (!vpmem) {
> >>> +                err = -ENOMEM;
> >>> +                goto out_err;
> >>> +        }
> >>> +
> >>> +        vpmem->vdev = vdev;
> >>> +        vdev->priv = vpmem;
> >>> +        err = init_vq(vpmem);
> >>> +        if (err)
> >>> +                goto out_err;
> >>> +
> >>> +        virtio_cread(vpmem->vdev, struct virtio_pmem_config,
> >>> +                        start, &vpmem->start);
> >>> +        virtio_cread(vpmem->vdev, struct virtio_pmem_config,
> >>> +                        size, &vpmem->size);
> >>> +
> >>> +        res.start = vpmem->start;
> >>> +        res.end = vpmem->start + vpmem->size-1;
> >>> +        vpmem->nd_desc.provider_name = "virtio-pmem";
> >>> +        vpmem->nd_desc.module = THIS_MODULE;
> >>> +
> >>> +        vpmem->nvdimm_bus = nvdimm_bus_register(&vdev->dev,
> >>> +                                                &vpmem->nd_desc);
> >>> +        if (!vpmem->nvdimm_bus)
> >>> +                goto out_vq;
> >>> +
> >>> +        dev_set_drvdata(&vdev->dev, vpmem->nvdimm_bus);
> >>> +
> >>> +        ndr_desc.res = &res;
> >>> +        ndr_desc.numa_node = nid;
> >>> +        ndr_desc.flush = async_pmem_flush;
> >>> +        set_bit(ND_REGION_PAGEMAP, &ndr_desc.flags);
> >>> +        set_bit(ND_REGION_ASYNC, &ndr_desc.flags);
> >>> +        nd_region = nvdimm_pmem_region_create(vpmem->nvdimm_bus, &ndr_desc);
> >>> +
> >>
> >> I'd drop this empty line.
> > 
> > hmm.
> > 
> 
> The common pattern after allocating something is to immediately check for it
> in the next line (like you do throughout this patch ;) )

Right. But on the rare occasions when I think whitespace makes the code
prettier, I tend to add it. Maybe I should not :)

> 
> ...
> >> You are not freeing "vdev->priv".
> > 
> > vdev->priv is vpmem, which is allocated using the devm API.
> 
> I'm confused. Looking at drivers/virtio/virtio_balloon.c:
> 
> static void virtballoon_remove(struct virtio_device *vdev)
> {
>         struct virtio_balloon *vb = vdev->priv;
> 
>         ...
> 
>         kfree(vb);
> }
> 
> I think you should do the same here, vdev->priv is allocated in
> virtio_pmem_probe.
> 
> But maybe I am missing something important here :)

Because virtio_balloon uses "kzalloc" for the allocation, that memory has to
be freed explicitly. But virtio-pmem uses "devm_kzalloc", which takes care of
automatically freeing the device memory when the associated device is
detached.

Thanks,
Pankaj

> 
> >>
> >>> +        nvdimm_bus_unregister(nvdimm_bus);
> >>> +        vdev->config->del_vqs(vdev);
> >>> +        vdev->config->reset(vdev);
> >>> +}
> >>> +
> 
> -- 
> 
> Thanks,
> 
> David / dhildenb
> 
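As a footnote to the devm_kzalloc discussion above: a device-managed
allocation is tied to the lifetime of the struct device, so the remove path
needs no explicit kfree(). A minimal, hypothetical sketch (example_probe()
and example_remove() are illustrative names, not the driver's actual
functions):

        static int example_probe(struct virtio_device *vdev)
        {
                struct virtio_pmem *vpmem;

                /* Freed automatically when the device is detached/unbound. */
                vpmem = devm_kzalloc(&vdev->dev, sizeof(*vpmem), GFP_KERNEL);
                if (!vpmem)
                        return -ENOMEM;

                vdev->priv = vpmem;
                return 0;
        }

        static void example_remove(struct virtio_device *vdev)
        {
                /* No kfree(vdev->priv) needed here, unlike the
                 * kzalloc()/kfree(vb) pairing in virtio_balloon's
                 * virtballoon_remove().
                 */
        }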