From: David Riley
Date: Mon, 9 Sep 2019 10:12:09 -0700
Subject: Re: [PATCH v2] drm/virtio: Use vmalloc for command buffer allocations.
To: Gerd Hoffmann
Cc: dri-devel@lists.freedesktop.org, virtualization@lists.linux-foundation.org,
    David Airlie, Daniel Vetter, Gurchetan Singh, Stéphane Marchesin,
    linux-kernel@vger.kernel.org
In-Reply-To: <20190906051847.75mj4772nqwdper6@sirius.home.kraxel.org>
References: <20190829212417.257397-1-davidriley@chromium.org>
            <20190905220008.75488-1-davidriley@chromium.org>
            <20190906051847.75mj4772nqwdper6@sirius.home.kraxel.org>

On Thu, Sep 5, 2019 at 10:18 PM Gerd Hoffmann wrote:
>
> > +/* How many bytes left in this page. */
> > +static unsigned int rest_of_page(void *data)
> > +{
> > +        return PAGE_SIZE - offset_in_page(data);
> > +}
>
> Not needed.
>
> > +/* Create sg_table from a vmalloc'd buffer. */
> > +static struct sg_table *vmalloc_to_sgt(char *data, uint32_t size, int *sg_ents)
> > +{
> > +        int nents, ret, s, i;
> > +        struct sg_table *sgt;
> > +        struct scatterlist *sg;
> > +        struct page *pg;
> > +
> > +        *sg_ents = 0;
> > +
> > +        sgt = kmalloc(sizeof(*sgt), GFP_KERNEL);
> > +        if (!sgt)
> > +                return NULL;
> > +
> > +        nents = DIV_ROUND_UP(size, PAGE_SIZE) + 1;
>
> Why +1?

This is part of handling offsets within the vmalloc buffer and maintains
parity with the existing !is_vmalloc_addr case (sg_init_one handles
offsets within pages internally).  I left it in because this helper is
used for all sg/descriptor generation and I wasn't sure whether someone
might later do something like:

        buf = vmemdup_user()
        offset = find_interesting(buf)
        queue(buf + offset)

To answer your question directly: if we handle offsets, a call like
vmalloc_to_sgt(size = PAGE_SIZE + 2) could need 3 sg_ents (e.g. one byte
at the end of the first page, a full second page, and one byte on a
third), and the +1 accounts for that extra page.

I'll just remove all support for offsets in v3 of the patch and add a
comment noting that behaviour differs depending on where the buffer was
originally allocated.

> > +        ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
> > +        if (ret) {
> > +                kfree(sgt);
> > +                return NULL;
> > +        }
> > +
> > +        for_each_sg(sgt->sgl, sg, nents, i) {
> > +                pg = vmalloc_to_page(data);
> > +                if (!pg) {
> > +                        sg_free_table(sgt);
> > +                        kfree(sgt);
> > +                        return NULL;
> > +                }
> > +
> > +                s = rest_of_page(data);
> > +                if (s > size)
> > +                        s = size;
>
> vmalloc memory is page aligned, so:
>
>         s = min(PAGE_SIZE, size);

As per above, will remove with v3.

> > +                sg_set_page(sg, pg, s, offset_in_page(data));
>
> Offset is always zero.

As per above, will remove with v3.

> > +
> > +                size -= s;
> > +                data += s;
> > +                *sg_ents += 1;
>
> sg_ents isn't used anywhere.

It's used for outcnt.

> > +
> > +                if (size) {
> > +                        sg_unmark_end(sg);
> > +                } else {
> > +                        sg_mark_end(sg);
> > +                        break;
> > +                }
>
> That looks a bit strange.  I guess you need only one of the two because
> the other is the default?

I was being overly paranoid and didn't want to make assumptions about
the initial state of the table.  I'll simplify.

> >  static int virtio_gpu_queue_fenced_ctrl_buffer(struct virtio_gpu_device *vgdev,
> >                                                 struct virtio_gpu_vbuffer *vbuf,
> >                                                 struct virtio_gpu_ctrl_hdr *hdr,
> >                                                 struct virtio_gpu_fence *fence)
> >  {
> >          struct virtqueue *vq = vgdev->ctrlq.vq;
> > +        struct scatterlist *vout = NULL, sg;
> > +        struct sg_table *sgt = NULL;
> >          int rc;
> > +        int outcnt = 0;
> > +
> > +        if (vbuf->data_size) {
> > +                if (is_vmalloc_addr(vbuf->data_buf)) {
> > +                        sgt = vmalloc_to_sgt(vbuf->data_buf, vbuf->data_size,
> > +                                             &outcnt);
> > +                        if (!sgt)
> > +                                return -ENOMEM;
> > +                        vout = sgt->sgl;
> > +                } else {
> > +                        sg_init_one(&sg, vbuf->data_buf, vbuf->data_size);
> > +                        vout = &sg;
> > +                        outcnt = 1;
>
> outcnt must be set in both cases.

outcnt is set by vmalloc_to_sgt() in the vmalloc case.

> > +static int virtio_gpu_queue_ctrl_buffer(struct virtio_gpu_device *vgdev,
> > +                                        struct virtio_gpu_vbuffer *vbuf)
> > +{
> > +        return virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, NULL, NULL);
> > +}
>
> Changing virtio_gpu_queue_ctrl_buffer to call
> virtio_gpu_queue_fenced_ctrl_buffer should be done in a separate patch.

Will do.

Thanks,
David
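
P.S. For concreteness, below is roughly the simplified helper I have in
mind for v3.  It is an untested sketch: it drops the offset handling
discussed above, assumes the vmalloc'd buffer is page aligned, and
relies on sg_alloc_table() terminating the table, so the explicit
sg_mark_end()/sg_unmark_end() calls go away.

/* Create sg_table from a vmalloc'd buffer, which must be page aligned. */
static struct sg_table *vmalloc_to_sgt(char *data, uint32_t size, int *sg_ents)
{
        int nents, ret, s, i;
        struct sg_table *sgt;
        struct scatterlist *sg;
        struct page *pg;

        *sg_ents = 0;

        sgt = kmalloc(sizeof(*sgt), GFP_KERNEL);
        if (!sgt)
                return NULL;

        /* No offset support, so no extra entry for a partial leading page. */
        nents = DIV_ROUND_UP(size, PAGE_SIZE);
        ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
        if (ret) {
                kfree(sgt);
                return NULL;
        }

        for_each_sg(sgt->sgl, sg, nents, i) {
                pg = vmalloc_to_page(data);
                if (!pg) {
                        sg_free_table(sgt);
                        kfree(sgt);
                        return NULL;
                }

                /* Last entry may be a partial page; offset is always zero. */
                s = min_t(int, PAGE_SIZE, size);
                sg_set_page(sg, pg, s, 0);

                size -= s;
                data += s;
                *sg_ents += 1;
        }

        return sgt;
}

Since nents now matches the number of entries filled exactly, and
sg_alloc_table() already marks the last entry as the end of the table,
no explicit end marking should be needed (assuming I'm reading
sg_init_table() correctly).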
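
Also, to make the outcnt handling concrete, this is how the dispatch in
virtio_gpu_queue_fenced_ctrl_buffer() sets it on both paths (abridged
from the v2 hunk quoted above, with comments added for clarity):

        if (vbuf->data_size) {
                if (is_vmalloc_addr(vbuf->data_buf)) {
                        /* vmalloc_to_sgt() writes the entry count into outcnt. */
                        sgt = vmalloc_to_sgt(vbuf->data_buf, vbuf->data_size,
                                             &outcnt);
                        if (!sgt)
                                return -ENOMEM;
                        vout = sgt->sgl;
                } else {
                        /* Contiguous kmalloc'd buffer: a single entry. */
                        sg_init_one(&sg, vbuf->data_buf, vbuf->data_size);
                        vout = &sg;
                        outcnt = 1;
                }
        }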