From: Quentin Deslandes
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
    Song Liu, Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev,
    Hao Luo, Jiri Olsa, David S. Miller,
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, Mykola Lysenko, Shuah Khan,
    Dmitrii Banshchikov, Kernel Team
Subject: [PATCH bpf-next v3 13/16] bpfilter: add table code generation
Date: Sat, 24 Dec 2022 01:03:59 +0100
Message-ID: <20221224000402.476079-14-qde@naccy.de>
In-Reply-To: <20221224000402.476079-1-qde@naccy.de>
References: <20221224000402.476079-1-qde@naccy.de>
X-Mailer: git-send-email 2.38.1
X-Mailing-List: linux-kernel@vger.kernel.org

Table code generation consists of multiple steps:

 1) Find the front and last rules for the supplied table and hook.
 2) Try to generate code for each rule in [front rule; last rule].
 3) Try to generate each remaining subprog by its type.

Co-developed-by: Dmitrii Banshchikov
Signed-off-by: Dmitrii Banshchikov
Signed-off-by: Quentin Deslandes
---
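A note for reviewers: the three steps above map directly onto the new helpers
introduced by this patch. The snippet below is only an illustrative sketch; it
is not part of the patch, the helper name table_codegen_sketch() is made up,
and error logging is elided. It condenses try_codegen() to show how the steps
fit together:

    static int table_codegen_sketch(struct codegen *codegen,
                                    const struct table *table)
    {
            /* 1) The front and last rules for the requested hook come from
             *    the table's hook_entry[] and underflow[] offsets.
             */
            struct rule *front = &table->rules[table->hook_entry[codegen->iptables_hook]];
            struct rule *last = &table->rules[table->underflow[codegen->iptables_hook]];
            int r;

            /* 2) Try to generate code for each rule in [front rule; last rule]. */
            codegen->rule_index = front - table->rules;
            r = try_codegen_rules(codegen, front, last);
            if (r)
                    return r;

            /* 3) Try to generate each remaining subprogram (e.g. user-defined
             *    chains) by its type.
             */
            return try_codegen_subprogs(codegen, table);
    }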
 net/bpfilter/codegen.c | 359 +++++++++++++++++++++++++++++++++++++++++
 net/bpfilter/codegen.h |   2 +
 2 files changed, 361 insertions(+)

diff --git a/net/bpfilter/codegen.c b/net/bpfilter/codegen.c
index e7ae7dfa5118..db0a20b378b5 100644
--- a/net/bpfilter/codegen.c
+++ b/net/bpfilter/codegen.c
@@ -4,15 +4,22 @@
  * Copyright (c) 2022 Meta Platforms, Inc. and affiliates.
  */
 
+#define _GNU_SOURCE
+
 #include "codegen.h"
 
 #include "../../include/uapi/linux/bpfilter.h"
 
+#include
+#include
 #include
+#include
+
 #include
 #include
+#include
 #include
 #include
 #include
@@ -20,6 +27,8 @@
 #include
 
 #include "logger.h"
+#include "rule.h"
+#include "table.h"
 
 enum fixup_insn_type {
        FIXUP_INSN_OFF,
@@ -53,6 +62,65 @@ static int subprog_desc_comparator(const void *x, const void *y)
        return -1;
 }
 
+/**
+ * codegen_inline_memset_zero_64() - Generate eBPF bytecode to initialise a
+ *     contiguous memory area with 0.
+ * @ctx: codegen context.
+ * @reg: register containing the address of the memory area to initialise.
+ *     The caller needs to initialise the register properly before calling
+ *     this function.
+ * @size: size of the memory area. As this function initialises 64 bits at a
+ *     time, @size needs to be a multiple of 64 bits. If it is not, the
+ *     function returns without initialising the memory and an error message
+ *     is printed out.
+ *
+ * Return: 0 on success, negative errno value on error.
+ */
+static int codegen_inline_memset_zero_64(struct codegen *ctx, int reg,
+                                         size_t size)
+{
+       if (size % 8) {
+               BFLOG_ERR("codegen_inline_memset_zero_64() called with size %zu, size must be a multiple of 8",
+                         size);
+               return -EINVAL;
+       }
+
+       for (size_t i = 0; i * 8 < size; ++i)
+               EMIT(ctx, BPF_ST_MEM(BPF_DW, reg, i * 8, 0));
+
+       return 0;
+}
+
+static int codegen_push_subprog(struct codegen *codegen,
+                                struct codegen_subprog_desc *subprog)
+{
+       // TODO: merge this with codegen_fixup_push.
+
+       if (codegen->subprogs_cur == codegen->subprogs_max) {
+               struct codegen_subprog_desc **subprogs;
+               uint16_t subprogs_max;
+
+               subprogs_max = codegen->subprogs_cur ?
+                              2 * codegen->subprogs_cur : 1;
+               subprogs = reallocarray(codegen->subprogs, subprogs_max,
+                                       sizeof(codegen->subprogs[0]));
+               if (!subprogs) {
+                       BFLOG_ERR("out of memory");
+                       return -ENOMEM;
+               }
+
+               codegen->subprogs_max = subprogs_max;
+               codegen->subprogs = subprogs;
+       }
+
+       codegen->subprogs[codegen->subprogs_cur++] = subprog;
+
+       qsort(codegen->subprogs, codegen->subprogs_cur,
+             sizeof(codegen->subprogs[0]), subprog_desc_comparator);
+
+       return 0;
+}
+
 static const struct codegen_subprog_desc *codegen_find_subprog(struct codegen *codegen,
                                                                const struct codegen_subprog_desc **subprog)
 {
@@ -601,6 +669,297 @@ int create_codegen(struct codegen *codegen, enum bpf_prog_type type)
        return r;
 }
 
+static int try_codegen_rules(struct codegen *codegen, struct rule *rule_front,
+                             struct rule *rule_last)
+{
+       int r;
+
+       for (; rule_front <= rule_last; ++rule_front, ++codegen->rule_index) {
+               rule_front->index = codegen->rule_index;
+               r = gen_inline_rule(codegen, rule_front);
+               if (r) {
+                       BFLOG_ERR("failed to generate inline rule: %s",
+                                 STRERR(r));
+                       return r;
+               }
+
+               r = codegen_fixup(codegen, CODEGEN_FIXUP_NEXT_RULE);
+               if (r) {
+                       BFLOG_ERR("failed to generate next rule fixups: %s",
+                                 STRERR(r));
+                       return r;
+               }
+
+               r = codegen_fixup(codegen, CODEGEN_FIXUP_COUNTERS_INDEX);
+               if (r) {
+                       BFLOG_ERR("failed to generate counters fixups: %s",
+                                 STRERR(r));
+                       return r;
+               }
+       }
+
+       r = codegen_fixup(codegen, CODEGEN_FIXUP_END_OF_CHAIN);
+       if (r) {
+               BFLOG_ERR("failed to generate end of chain fixups: %s",
+                         STRERR(r));
+               return r;
+       }
+
+       return 0;
+}
+
+static struct rule *table_find_last_rule(const struct table *table,
+                                         struct rule *rule_front)
+{
+       for (; rule_front; ++rule_front) {
+               if (!rule_is_unconditional(rule_front))
+                       continue;
+
+               if (!rule_has_standard_target(rule_front))
+                       continue;
+
+               if (standard_target_verdict(rule_front->target.ipt_target) >= 0)
+                       continue;
+
+               return rule_front;
+       }
+
+       return rule_front;
+}
+
+static int try_codegen_user_chain_subprog(struct codegen *codegen,
+                                          const struct table *table,
+                                          struct codegen_subprog_desc *subprog)
+{
+       struct rule *rule_front;
+       struct rule *rule_last;
+       int r;
+
+       rule_front = table_find_rule_by_offset(table, subprog->offset);
+       if (!rule_front) {
+               BFLOG_ERR("failed to get rule at offset %d", subprog->offset);
+               return -EINVAL;
+       }
+
+       rule_last = table_find_last_rule(table, rule_front);
+       if (!rule_last) {
+               BFLOG_ERR("failed to find last rule");
+               return -EINVAL;
+       }
+
+       subprog->insn = codegen->len_cur;
+       codegen->rule_index = rule_front - table->rules;
+       r = try_codegen_rules(codegen, rule_front, rule_last);
+       if (r) {
+               BFLOG_ERR("failed to generate rules");
+               return r;
+       }
+
+       return codegen_push_subprog(codegen, subprog);
+}
+
+static int try_codegen_subprogs(struct codegen *codegen, const struct table *table)
+{
+       while (!list_empty(&codegen->awaiting_subprogs)) {
+               struct codegen_subprog_desc *subprog;
+               int r = -EINVAL;
+
+               subprog = list_entry(codegen->awaiting_subprogs.next,
+                                    struct codegen_subprog_desc,
+                                    list);
+
+               if (subprog->type == CODEGEN_SUBPROG_USER_CHAIN) {
+                       r = try_codegen_user_chain_subprog(codegen, table,
+                                                          subprog);
+                       if (r < 0) {
+                               BFLOG_ERR("failed to generate code for user defined chain: %s",
+                                         STRERR(r));
+                               return r;
+                       }
+               } else {
+                       BFLOG_ERR("code generation for subprogram of type %d is not supported",
+                                 subprog->type);
+                       return -EINVAL;
+               }
+
+               list_del(&subprog->list);
+       }
+
+       return 0;
+}
+
+/**
+ * generate_inline_forward_packet_assessment() - Add eBPF bytecode to assess
+ *     whether the current packet is to be forwarded or not.
+ * @ctx: context to add the bytecode to.
+ *
+ * Use bpf_fib_lookup() to find out whether the current packet is to be
+ * forwarded or not. bpf_fib_lookup() requires a struct bpf_fib_lookup to be
+ * filled with data from the packet.
+ * The outcome of the bytecode depends on the iptables hook used: the chain
+ * for BPFILTER_INET_HOOK_FORWARD is processed if the packet is to be
+ * forwarded, and is skipped otherwise by jumping to the next chain. The
+ * opposite behaviour applies if the hook is BPFILTER_INET_HOOK_LOCAL_IN.
+ *
+ * Return: 0 on success, negative errno value on error.
+ */
+static int generate_inline_forward_packet_assessment(struct codegen *ctx)
+{
+       // Set ARG2 to contain the address of the struct bpf_fib_lookup.
+       EMIT(ctx, BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_10));
+       EMIT(ctx, BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2,
+                               (-(unsigned int)sizeof(struct bpf_fib_lookup)) - sizeof(struct runtime_context)));
+
+       codegen_inline_memset_zero_64(ctx, BPF_REG_ARG2,
+                                     sizeof(struct bpf_fib_lookup));
+
+       // Store h_proto for the decision below.
+       EMIT(ctx, BPF_LDX_MEM(BPF_H, CODEGEN_REG_SCRATCH3, CODEGEN_REG_L3,
+                             -(short int)(ETH_HLEN) + (short int)offsetof(struct ethhdr, h_proto)));
+       EMIT(ctx, BPF_JMP_IMM(BPF_JEQ, CODEGEN_REG_SCRATCH3,
+                             bpf_htons(ETH_P_IP), 2));
+
+       // Not IPv4? Then do not process any FORWARD rule and move to the next chain (program).
+       EMIT(ctx, BPF_MOV64_IMM(CODEGEN_REG_RETVAL, TC_ACT_UNSPEC));
+       EMIT_FIXUP(ctx, CODEGEN_FIXUP_END_OF_CHAIN, BPF_JMP_A(0));
+
+       // bpf_fib_lookup.family field
+       EMIT(ctx, BPF_ST_MEM(BPF_B, BPF_REG_ARG2,
+                            offsetof(struct bpf_fib_lookup, family), AF_INET));
+
+       // bpf_fib_lookup.l4_protocol field
+       EMIT(ctx, BPF_LDX_MEM(BPF_B, CODEGEN_REG_SCRATCH3, CODEGEN_REG_L3,
+                             offsetof(struct iphdr, protocol)));
+       EMIT(ctx, BPF_STX_MEM(BPF_B, BPF_REG_ARG2, CODEGEN_REG_SCRATCH3,
+                             offsetof(struct bpf_fib_lookup, l4_protocol)));
+
+       // bpf_fib_lookup.tot_len field
+       EMIT(ctx, BPF_LDX_MEM(BPF_H, CODEGEN_REG_SCRATCH3, CODEGEN_REG_L3,
+                             offsetof(struct iphdr, tot_len)));
+       EMIT(ctx, BPF_STX_MEM(BPF_H, BPF_REG_ARG2, CODEGEN_REG_SCRATCH3,
+                             offsetof(struct bpf_fib_lookup, tot_len)));
+
+       // bpf_fib_lookup.ifindex field
+       EMIT(ctx, BPF_LDX_MEM(BPF_W, CODEGEN_REG_SCRATCH3, CODEGEN_REG_CTX,
+                             offsetof(struct __sk_buff, ingress_ifindex)));
+       EMIT(ctx, BPF_STX_MEM(BPF_W, BPF_REG_ARG2, CODEGEN_REG_SCRATCH3,
+                             offsetof(struct bpf_fib_lookup, ifindex)));
+
+       // bpf_fib_lookup.tos field
+       EMIT(ctx, BPF_LDX_MEM(BPF_B, CODEGEN_REG_SCRATCH3, CODEGEN_REG_L3,
+                             offsetof(struct iphdr, tos)));
+       EMIT(ctx, BPF_STX_MEM(BPF_B, BPF_REG_ARG2, CODEGEN_REG_SCRATCH3,
+                             offsetof(struct bpf_fib_lookup, tos)));
+
+       // bpf_fib_lookup.ipv4_src and bpf_fib_lookup.ipv4_dst fields
+       EMIT(ctx, BPF_LDX_MEM(BPF_W, CODEGEN_REG_SCRATCH3, CODEGEN_REG_L3,
+                             offsetof(struct iphdr, saddr)));
+       EMIT(ctx, BPF_STX_MEM(BPF_W, BPF_REG_ARG2, CODEGEN_REG_SCRATCH3,
+                             offsetof(struct bpf_fib_lookup, ipv4_src)));
+       EMIT(ctx, BPF_LDX_MEM(BPF_W, CODEGEN_REG_SCRATCH3, CODEGEN_REG_L3,
+                             offsetof(struct iphdr, daddr)));
+       EMIT(ctx, BPF_STX_MEM(BPF_W, BPF_REG_ARG2, CODEGEN_REG_SCRATCH3,
+                             offsetof(struct bpf_fib_lookup, ipv4_dst)));
+
+       EMIT(ctx, BPF_MOV64_REG(BPF_REG_ARG1, CODEGEN_REG_CTX));
+       EMIT(ctx, BPF_MOV64_IMM(BPF_REG_ARG3, sizeof(struct bpf_fib_lookup)));
+       EMIT(ctx, BPF_MOV64_IMM(BPF_REG_ARG4, 0));
+
+       EMIT(ctx, BPF_EMIT_CALL(BPF_FUNC_fib_lookup));
+       EMIT(ctx, BPF_MOV64_REG(CODEGEN_REG_SCRATCH3, CODEGEN_REG_RETVAL));
+       EMIT(ctx, BPF_MOV64_IMM(CODEGEN_REG_RETVAL, TC_ACT_UNSPEC));
+       EMIT_FIXUP(ctx, CODEGEN_FIXUP_END_OF_CHAIN,
+                  BPF_JMP_IMM(ctx->iptables_hook == BPFILTER_INET_HOOK_FORWARD ?
+                                      BPF_JNE : BPF_JEQ,
+                              CODEGEN_REG_SCRATCH3, BPF_FIB_LKUP_RET_SUCCESS, 0));
+
+       return 0;
+}
+
+int try_codegen(struct codegen *codegen, const struct table *table)
+{
+       struct rule *rule_front;
+       struct rule *rule_last;
+       int r;
+
+       r = codegen->codegen_ops->gen_inline_prologue(codegen);
+       if (r) {
+               BFLOG_ERR("failed to generate inline prologue: %s", STRERR(r));
+               return r;
+       }
+
+       r = codegen->codegen_ops->load_packet_data(codegen, CODEGEN_REG_L3);
+       if (r) {
+               BFLOG_ERR("failed to generate code to load packet data: %s",
+                         STRERR(r));
+               return r;
+       }
+
+       r = codegen->codegen_ops->load_packet_data_end(codegen,
+                                                      CODEGEN_REG_DATA_END);
+       if (r) {
+               BFLOG_ERR("failed to generate code to load packet data end: %s",
+                         STRERR(r));
+               return r;
+       }
+
+       // Save the packet size once.
+       EMIT(codegen, BPF_MOV64_REG(CODEGEN_REG_SCRATCH2, CODEGEN_REG_DATA_END));
+       EMIT(codegen, BPF_ALU64_REG(BPF_SUB, CODEGEN_REG_SCRATCH2, CODEGEN_REG_L3));
+       EMIT(codegen, BPF_STX_MEM(BPF_W, CODEGEN_REG_RUNTIME_CTX, CODEGEN_REG_SCRATCH2,
+                                 STACK_RUNTIME_CONTEXT_OFFSET(data_size)));
+
+       EMIT(codegen, BPF_ALU64_IMM(BPF_ADD, CODEGEN_REG_L3, ETH_HLEN));
+       EMIT_FIXUP(codegen, CODEGEN_FIXUP_END_OF_CHAIN,
+                  BPF_JMP_REG(BPF_JGT, CODEGEN_REG_L3, CODEGEN_REG_DATA_END, 0));
+       EMIT(codegen, BPF_MOV64_REG(CODEGEN_REG_SCRATCH1, CODEGEN_REG_L3));
+       EMIT(codegen, BPF_ALU64_IMM(BPF_ADD, CODEGEN_REG_SCRATCH1, sizeof(struct iphdr)));
+       EMIT_FIXUP(codegen, CODEGEN_FIXUP_END_OF_CHAIN,
+                  BPF_JMP_REG(BPF_JGT, CODEGEN_REG_SCRATCH1, CODEGEN_REG_DATA_END, 0));
+
+       if (codegen->iptables_hook == BPFILTER_INET_HOOK_LOCAL_IN ||
+           codegen->iptables_hook == BPFILTER_INET_HOOK_FORWARD) {
+               /* There is no XDP nor TC forward hook to attach to, so we
+                * need to add code to assess whether an incoming packet is
+                * to be forwarded or not.
+                */
+               BFLOG_NOTICE("generate forward packet assessment");
+               generate_inline_forward_packet_assessment(codegen);
+       }
+
+       rule_front = &table->rules[table->hook_entry[codegen->iptables_hook]];
+       rule_last = &table->rules[table->underflow[codegen->iptables_hook]];
+
+       codegen->rule_index = rule_front - table->rules;
+       r = try_codegen_rules(codegen, rule_front, rule_last);
+       if (r) {
+               BFLOG_ERR("failed to generate rules: %s", STRERR(r));
+               return r;
+       }
+
+       r = codegen->codegen_ops->gen_inline_epilogue(codegen);
+       if (r) {
+               BFLOG_ERR("failed to generate inline epilogue: %s",
+                         STRERR(r));
+               return r;
+       }
+
+       r = try_codegen_subprogs(codegen, table);
+       if (r) {
+               BFLOG_ERR("failed to generate subprograms: %s", STRERR(r));
+               return r;
+       }
+
+       r = codegen_fixup(codegen, CODEGEN_FIXUP_JUMP_TO_CHAIN);
+       if (r) {
+               BFLOG_ERR("failed to generate fixups: %s", STRERR(r));
+               return r;
+       }
+
+       codegen->shared_codegen->maps[CODEGEN_RELOC_MAP].max_entries = table->num_rules;
+
+       return 0;
+}
+
 int load_img(struct codegen *codegen)
 {
        union bpf_attr attr = {};
diff --git a/net/bpfilter/codegen.h b/net/bpfilter/codegen.h
index cca45a13c4aa..6cfd8e7a3692 100644
--- a/net/bpfilter/codegen.h
+++ b/net/bpfilter/codegen.h
@@ -18,6 +18,7 @@
 #include
 
 struct context;
+struct table;
 
 #define CODEGEN_REG_RETVAL BPF_REG_0
 #define CODEGEN_REG_SCRATCH1 BPF_REG_1
@@ -174,6 +175,7 @@ int codegen_fixup(struct codegen *codegen, enum codegen_fixup_type fixup_type);
 int emit_fixup(struct codegen *codegen, enum codegen_fixup_type fixup_type,
               struct bpf_insn insn);
 int emit_add_counter(struct codegen *codegen);
+int try_codegen(struct codegen *codegen, const struct table *table);
 int load_img(struct codegen *codegen);
 void unload_img(struct codegen *codegen);
 void free_codegen(struct codegen *codegen);
-- 
2.38.1
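
For context, a caller is expected to drive the new try_codegen() entry point
roughly as follows. This is an illustrative sketch only: the wrapper name
codegen_table_sketch() is made up, it assumes the existing create_codegen(),
load_img() and free_codegen() helpers, a struct table built by the existing
table handling code, and that codegen->iptables_hook has already been set up
by the caller:

    static int codegen_table_sketch(struct codegen *codegen,
                                    const struct table *table,
                                    enum bpf_prog_type type)
    {
            int r;

            /* Prepare a codegen context for the requested program type. */
            r = create_codegen(codegen, type);
            if (r)
                    return r;

            /* Generate the whole program for the table and its hook... */
            r = try_codegen(codegen, table);
            if (r)
                    goto err_free;

            /* ...and load the resulting image into the kernel. */
            r = load_img(codegen);
            if (r)
                    goto err_free;

            return 0;

    err_free:
            free_codegen(codegen);
            return r;
    }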