Date: Thu, 27 Jan 2022 13:24:00 +0000
From: Mark Rutland
To: Ard Biesheuvel
Cc: Yinan Liu, Steven Rostedt, "open list:LINUX FOR POWERPC (32-BIT AND 64-BIT)", Sachin Sant, Linux Kernel Mailing List, Kees Cook
Subject: Re: [powerpc] ftrace warning kernel/trace/ftrace.c:2068 with code-patching selftests
References: <20220124114548.30241947@gandalf.local.home> <0fa0daec-881a-314b-e28b-3828e80bbd90@linux.alibaba.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jan 27, 2022 at 02:07:03PM +0100, Ard Biesheuvel wrote:
> On Thu, 27 Jan 2022 at 13:59, Mark Rutland wrote:
> >
> > On Thu, Jan 27, 2022 at 01:22:17PM +0100, Ard Biesheuvel wrote:
> > > On Thu, 27 Jan 2022 at 13:20, Mark Rutland wrote:
> > > > On Thu, Jan 27, 2022 at 01:03:34PM +0100, Ard Biesheuvel wrote:
> > > > >
> > > > > These architectures use place-relative extables for the same reason:
> > > > > place-relative references are resolved at build time rather than at
> > > > > runtime during relocation, making a build-time sort feasible.
> > > > >
> > > > > arch/alpha/include/asm/extable.h:#define ARCH_HAS_RELATIVE_EXTABLE
> > > > > arch/arm64/include/asm/extable.h:#define ARCH_HAS_RELATIVE_EXTABLE
> > > > > arch/ia64/include/asm/extable.h:#define ARCH_HAS_RELATIVE_EXTABLE
> > > > > arch/parisc/include/asm/uaccess.h:#define ARCH_HAS_RELATIVE_EXTABLE
> > > > > arch/powerpc/include/asm/extable.h:#define ARCH_HAS_RELATIVE_EXTABLE
> > > > > arch/riscv/include/asm/extable.h:#define ARCH_HAS_RELATIVE_EXTABLE
> > > > > arch/s390/include/asm/extable.h:#define ARCH_HAS_RELATIVE_EXTABLE
> > > > > arch/x86/include/asm/extable.h:#define ARCH_HAS_RELATIVE_EXTABLE
> > > > >
> > > > > Note that the swap routine becomes something like the below, given
> > > > > that the relative references need to be fixed up after the entry
> > > > > changes place in the sorted list.
> > > > >
> > > > > static void swap_ex(void *a, void *b, int size)
> > > > > {
> > > > >         struct exception_table_entry *x = a, *y = b, tmp;
> > > > >         int delta = b - a;
> > > > >
> > > > >         tmp = *x;
> > > > >         x->insn = y->insn + delta;
> > > > >         y->insn = tmp.insn - delta;
> > > > >         ...
> > > > > }
> > > > >
> > > > > As a bonus, the resulting footprint of the table in the image is
> > > > > reduced by 8x, given that every 8-byte pointer has an accompanying
> > > > > 24-byte RELA record, so we go from 32 bytes to 4 bytes for every
> > > > > call to __gnu_mcount_mc.
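[Ard's `swap_ex()` sketch above elides the second field with `...`. A complete, self-contained user-space model of the same fixup logic is below; the `insn`/`fixup` field names mirror the kernel's relative extable layout, but this is an illustration of the technique, not the kernel implementation.]

```c
/*
 * Model of swapping two place-relative exception table entries.
 * Each 32-bit field holds (target address - entry address); when an
 * entry moves by `delta` bytes during sorting, its stored offsets
 * must be adjusted so they keep pointing at the same absolute target.
 */
struct exception_table_entry {
	int insn;   /* place-relative offset to the faulting insn */
	int fixup;  /* place-relative offset to the fixup code */
};

static void swap_ex(void *a, void *b, int size)
{
	struct exception_table_entry *x = a, *y = b, tmp;
	int delta = (int)((char *)b - (char *)a);

	(void)size; /* entries are fixed-size in this model */
	tmp = *x;
	x->insn  = y->insn  + delta;
	x->fixup = y->fixup + delta;
	y->insn  = tmp.insn  - delta;
	y->fixup = tmp.fixup - delta;
}
```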
> > > >
> > > > Absolutely -- it'd be great if we could do that for the callsite locations; the
> > > > difficulty is that the entries are generated by the compiler itself, so we'd
> > > > either need some build/link time processing to convert each absolute 64-bit
> > > > value to a relative 32-bit offset, or new compiler options to generate those as
> > > > relative offsets from the outset.
> > >
> > > Don't we use scripts/recordmcount.pl for that?
> >
> > Not quite -- we could adjust it to do so, but today it doesn't consider
> > existing mcount_loc entries, and just generates new ones where the compiler has
> > generated calls to mcount, which it finds by scanning the instructions in the
> > binary. Today it is not used:
> >
> > * On arm64, where we default to using `-fpatchable-function-entry=N`. That makes
> >   the compiler insert 2 NOPs in the function prologue, and log the location of
> >   that NOP sled to a section called `__patchable_function_entries`.
> >
> >   We need the compiler to do that since we can't reliably identify 2 NOPs in a
> >   function prologue as being intended to be a patch site, as e.g. there could
> >   be notrace functions where the compiler had to insert NOPs for alignment of a
> >   subsequent branch or similar.
> >
> > * On architectures with `-mnop-mcount`. On these, it's necessary to use
> >   `-mrecord-mcount` to have the compiler log the patch site, for the same
> >   reason as with `-fpatchable-function-entry`.
> >
> > * On architectures with `-mrecord-mcount` generally, since this avoids the
> >   overhead of scanning each object.
> >
> > * On x86 when objtool is used.
>
> Right.
>
> I suppose that on arm64, we can work around this by passing
> --apply-dynamic-relocs to the linker, so that all R_AARCH64_RELATIVE
> targets are prepopulated with the link-time value of the respective
> addresses. It does cause some bloat, which is why we disable that
> today, but we could make that dependent on ftrace being enabled.
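[As a concrete illustration of the `-fpatchable-function-entry` mechanism described above: GCC and Clang also expose it per function as an attribute. Where supported, the compiler emits the NOPs at the function entry and records their location in the `__patchable_function_entries` section. The feature-test guard below lets the sketch compile unchanged on compilers without the attribute.]

```c
/*
 * Sketch: per-function form of -fpatchable-function-entry=2.
 * Where the attribute is supported, the compiler emits 2 NOPs at the
 * entry of traced_add() and records their address in the
 * __patchable_function_entries section; the function's runtime
 * behavior is unchanged either way.
 */
#if defined(__has_attribute)
#  if __has_attribute(patchable_function_entry)
#    define PATCHABLE_ENTRY __attribute__((patchable_function_entry(2)))
#  endif
#endif
#ifndef PATCHABLE_ENTRY
#  define PATCHABLE_ENTRY /* attribute unsupported: plain function */
#endif

PATCHABLE_ENTRY
int traced_add(int a, int b)
{
	return a + b; /* the NOPs (if any) precede this body */
}
```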
We'd also need to teach the build-time sort to update the relocations, unless
you mean to also change the boot-time reloc code to RMW with the offset?

I think for right now the best thing is to disable the build-time sort for
arm64, but maybe something like that is the right thing to do longer term.

> I do wonder how much overhead we accumulate, though, by having all
> these relocations, but I suppose that is the situation today in any
> case.

Yeah; I suspect if we want to do something about that we want to do it more
generally, and would probably need to do something like the x86 approach and
rewrite the relocs at build time to something more compressed. If we applied
the dynamic relocs with the link-time address we'd only need 4 bytes to
identify each pointer to apply an offset to.

I'm not exactly sure how we could do that, nor what the trade-offs look like
in practice.

Thanks,
Mark.
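[Illustration of the size arithmetic discussed above: a place-relative 32-bit entry needs no RELA record, so the per-callsite cost drops from 8 bytes of pointer plus 24 bytes of RELA to just 4 bytes. The helpers below are a hypothetical user-space sketch of that encoding, not kernel code.]

```c
#include <stdint.h>

/*
 * Sketch: storing a location as a 32-bit offset from the entry's own
 * address instead of a 64-bit absolute pointer. Because the value is
 * place-relative, it stays correct wherever the image is loaded and
 * needs no dynamic relocation.
 */
static inline void reloc_store(int32_t *entry, uint64_t target)
{
	/* Encode an absolute address as an offset from the entry itself. */
	*entry = (int32_t)(target - (uint64_t)(uintptr_t)entry);
}

static inline uint64_t reloc_target(const int32_t *entry)
{
	/* Resolve a place-relative entry back to an absolute address. */
	return (uint64_t)(uintptr_t)entry + (uint64_t)(int64_t)*entry;
}
```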