From: Arnd Bergmann
Date: Fri, 26 Apr 2019 20:42:42 +0200
Subject: Re: [PATCH] riscv: Support non-coherency memory model
To: Guo Ren
Cc: Christoph Hellwig, Gary Guo, linux-arch@vger.kernel.org, Palmer Dabbelt, Andrew Waterman, Anup Patel, Xiang Xiaoyan, linux-kernel@vger.kernel.org, Mike Rapoport, Vincent Chen, Greentime Hu, ren_guo@c-sky.com, linux-riscv@lists.infradead.org, Marek Szyprowski, Robin Murphy, Scott Wood, tech-privileged@lists.riscv.org

On Fri, Apr 26, 2019 at 6:06 PM Guo Ren wrote:
> On Thu, Apr 25, 2019 at 11:50:11AM +0200, Arnd Bergmann wrote:
> > On Wed, Apr 24, 2019 at 4:23 PM Christoph Hellwig wrote:
> >
> > You could probably get away with allowing uncached mappings only
> > for huge pages, and using one or two of the bits in the PMD for it.
> > This should cover most use cases, since in practice coherent allocations
> > tend to be either small and rare (device descriptors) or very big
> > (frame buffer etc), and both cases can be handled with hugepages
> > and gen_pool_alloc, possibly with CMA added in, since there will likely
> > not be an IOMMU either on the systems that lack cache-coherent DMA.
>
> Generally, the attributes in a huge-tlb-entry and a leaf-tlb-entry should
> be the same. Putting the _PAGE_CACHE and _PAGE_BUF bits only in the
> huge-tlb-entry sounds a bit strange.

Well, the point is that we can't really change the meaning of the
existing low bits, but because of the alignment constraints on hugepages,
the extra bits are currently unused for hugepage TLBs.
There are other architectures that reuse the bits in clever ways, e.g.
allowing larger physical address ranges to be used with hugepages than
with normal pages.

> The gen_pool allocation is only 256KB by default, but a huge tlb entry
> is 4MB. Hardware couldn't set up a virtual-4MB to phys-256KB range
> mapping in the TLB.

I expect the size could easily be changed, as long as there is
sufficient physical memory. If the entire system has 32MB or less,
setting 2MB aside would of course have a fairly significant impact.

> > - you need to decide what is supposed to happen when there are
> >   multiple conflicting mappings for the same physical address.
>       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> What are the multiple conflicting mappings?

I mean when you have the linear mapping as cacheable and another mapping
for the same physical page as uncacheable, and then access virtual
addresses in both. This is usually a bad idea, but architectures go to
different lengths to prevent it.

The safest way would be for the CPU to produce a checkstop as soon as
there are TLB entries for the same physical address but with different
caching settings. You can also do that if you have a cache-bypassing
load/store that hits a live cache line.

The other extreme would be to not do anything special and try to come up
with sane behavior, e.g. allow accesses both ways but ensure that a
cache-bypassing load/store always flushes and invalidates cache lines
for the same physical address before its access.

      Arnd