Date: Tue, 22 Oct 2019 16:36:04 +0200
From: Cyril Hrubis
To: Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song, "David S. Miller", Jakub Kicinski, Jesper Dangaard Brouer, John Fastabend
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org, ltp@lists.linux.it, Richard Palethorpe
Subject: EPERM failures for repeated runs
Message-ID: <20191022143604.GA18468@rei>

Hi!

Lately we started writing BPF test cases for LTP, and after the first few tests we found that running more than a few in a row causes them to fail with EPERM.
The culprit is the deferred cleanup of the BPF maps, whose memory stays locked (and charged against the memlock limit) until the cleanup runs; see:

http://lists.linux.it/pipermail/ltp/2019-August/013349.html

We worked around that by bumping the limit for the tests in:

https://github.com/linux-test-project/ltp/commit/85c4e886b357f7844f6ab8ec5719168c38703a76

But it looks like this value will not scale, especially for architectures with pages larger than 4k: running four BPF tests in a row still fails on ppc64le even with the increased limit.

Perhaps I'm naive, but couldn't the kernel, when it fails to lock memory for a map, check whether a deferred cleanup is in progress and retry once it has finished? Or is this the intended behavior, and should we retry on EPERM in userspace?

-- 
Cyril Hrubis
chrubis@suse.cz