Date: Mon, 5 Oct 2020 06:06:19 +0300
From: Jarkko Sakkinen
To: Matthew Wilcox
Cc: x86@kernel.org, linux-sgx@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org,
    linux-mm@kvack.org, Andrew Morton, Jethro Beekman, Haitao Huang,
    Chunyang Hui, Jordan Hand, Nathaniel McCallum, Seth Moore,
    Darren Kenny, Sean Christopherson, Suresh Siddha,
    andriy.shevchenko@linux.intel.com, asapek@google.com, bp@alien8.de,
    cedric.xing@intel.com, chenalexchen@google.com, conradparker@google.com,
    cyhanish@google.com, dave.hansen@intel.com, haitao.huang@intel.com,
    kai.huang@intel.com, kai.svahn@intel.com, kmoy@google.com,
    ludloff@google.com, luto@kernel.org, nhorman@redhat.com,
    puiterwijk@redhat.com, rientjes@google.com, tglx@linutronix.de,
    yaozhangx@google.com, mikko.ylinen@intel.com
Subject: Re: [PATCH v39 11/24] x86/sgx: Add SGX enclave driver
Message-ID: <20201005030619.GA126283@linux.intel.com>
References: <20201003045059.665934-1-jarkko.sakkinen@linux.intel.com>
 <20201003045059.665934-12-jarkko.sakkinen@linux.intel.com>
 <20201003195440.GD20115@casper.infradead.org>
 <20201004215049.GA43926@linux.intel.com>
 <20201004222750.GI20115@casper.infradead.org>
 <20201004234153.GA49415@linux.intel.com>
 <20201005013053.GJ20115@casper.infradead.org>
In-Reply-To: <20201005013053.GJ20115@casper.infradead.org>

On Mon, Oct 05, 2020 at 02:30:53AM +0100, Matthew Wilcox wrote:
> > In my Geminilake NUC the maximum size of the address space is 64GB for
> > an enclave, and it is not fixed but can grow in microarchitectures
> > beyond that.
> >
> > That means that in (*artificial*) worst case the locks would be kept for
> > 64*1024*1024*1024/4096 = 16777216 iterations.
>
> Oh, there's support for that on the XArray API too.
>
>         xas_lock_irq(&xas);
>         xas_for_each_marked(&xas, page, end, PAGECACHE_TAG_DIRTY) {
>                 xas_set_mark(&xas, PAGECACHE_TAG_TOWRITE);
>                 if (++tagged % XA_CHECK_SCHED)
>                         continue;
>
>                 xas_pause(&xas);
>                 xas_unlock_irq(&xas);
>                 cond_resched();
>                 xas_lock_irq(&xas);
>         }
>         xas_unlock_irq(&xas);

Assuming we can iterate the array without encl->lock, I think this
would translate to:

        /*
         * Not taking encl->lock because:
         * 1. Page attributes are not written here.
         * 2. The only page attribute we read is set before the page is put
         *    into the array and stays constant throughout the enclave
         *    life-cycle.
         */
        xas_lock(&xas);
        xas_for_each(&xas, page, idx_end) {
                if (++tagged % XA_CHECK_SCHED)
                        continue;

                xas_pause(&xas);
                xas_unlock(&xas);

                /*
                 * The attributes are not protected by the xa_lock, so I'm
                 * assuming this is the legit place for the check.
                 */
                if (!page || (~page->vm_max_prot_bits & vm_prot_bits))
                        return -EACCES;

                cond_resched();
                xas_lock(&xas);
        }
        xas_unlock(&xas);

Obviously, we cannot use this pattern while taking encl->lock inside the
loop: that would be an ABBA ordering, and encl->lock is a mutex, so it
cannot be taken under the xa_lock spinlock anyway.

Let's enumerate:

A. sgx_encl_add_page(): uses xa_insert() and xa_erase().
B. sgx_encl_load_page(): uses xa_load().
C. sgx_encl_may_map(): is broken (for the moment).

A and B implicitly take the xa_lock, and if a page exists at all, we
only access a pure constant. Also, since the open file keeps the
instance alive, nobody is going to pull the carpet from under our feet.

OK, I've just concluded that we don't need to take encl->lock in this
case. Great.

/Jarkko
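[Editor's note: below is a minimal, self-contained sketch of the
xas_pause()/cond_resched() walk pattern discussed above. It is an
illustration only, not the sgx_encl_may_map() implementation: struct
encl_page, check_prot_range(), and its parameters are hypothetical
stand-ins, and XA_CHECK_SCHED is defined locally here (upstream defines
it privately in mm/page-writeback.c); only the XArray and scheduler
calls are the real kernel API.]

        #include <linux/xarray.h>
        #include <linux/sched.h>
        #include <linux/errno.h>

        /* Upstream keeps this constant private to mm/page-writeback.c. */
        #define XA_CHECK_SCHED  4096

        /* Hypothetical stand-in for the per-page state stored in the XArray. */
        struct encl_page {
                unsigned long vm_max_prot_bits; /* written once, before insertion */
        };

        /*
         * Walk pages[idx_start..idx_end] under the xa_lock, but drop the lock
         * and reschedule every XA_CHECK_SCHED entries so a long walk (e.g. the
         * 16777216-iteration worst case above) never holds the spinlock for
         * the whole range.
         */
        static int check_prot_range(struct xarray *pages, unsigned long idx_start,
                                    unsigned long idx_end, unsigned long vm_prot_bits)
        {
                struct encl_page *page;
                unsigned long count = 0;
                int ret = 0;
                XA_STATE(xas, pages, idx_start);

                xas_lock(&xas);
                xas_for_each(&xas, page, idx_end) {
                        /* vm_max_prot_bits is constant once the page is in the array. */
                        if (~page->vm_max_prot_bits & vm_prot_bits) {
                                ret = -EACCES;
                                break;
                        }

                        if (++count % XA_CHECK_SCHED)
                                continue;

                        /* Remember the position, release the lock, let others run. */
                        xas_pause(&xas);
                        xas_unlock(&xas);
                        cond_resched();
                        xas_lock(&xas);
                }
                xas_unlock(&xas);

                return ret;
        }

The sketch leans on the same property the message relies on: because
vm_max_prot_bits is set before the page is inserted and never changes
afterwards, the permission check needs neither encl->lock nor any
serialization beyond the xa_lock taken for the walk itself.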