Date: Tue, 6 Dec 2022 10:16:05 +0100
From: Jiri Pirko
To: ehakim@nvidia.com
Cc: linux-kernel@vger.kernel.org, raeds@nvidia.com, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, netdev@vger.kernel.org, sd@queasysnail.net
Subject: Re: [PATCH net-next v2] macsec: Add support for IFLA_MACSEC_OFFLOAD in the netlink layer
References: <20221206085757.5816-1-ehakim@nvidia.com>
In-Reply-To: <20221206085757.5816-1-ehakim@nvidia.com>

Tue, Dec 06, 2022 at 09:57:57AM CET, ehakim@nvidia.com wrote:
>From: Emeel Hakim
>
>This adds support for configuring Macsec offload through the

Tell the codebase what to do. Be imperative in your patch descriptions
so it is clear what the intentions of the patch are.

>netlink layer by:
>- Considering IFLA_MACSEC_OFFLOAD in macsec_fill_info.
>- Handling IFLA_MACSEC_OFFLOAD in macsec_changelink.
>- Adding IFLA_MACSEC_OFFLOAD to the netlink policy.
>- Adjusting macsec_get_size.

4 patches then? I mean really, I'm not a macsec person, but I should be
able to follow your intentions right away just by looking at the
description and code.

>
>The handling in macsec_changlink is similar to

s/macsec_changlink/macsec_changelink/

>macsec_upd_offload.
>Update macsec_upd_offload to use a common helper function
>to avoid duplication.
>
>Example for setting offload for a macsec device
> ip link set macsec0 type macsec offload mac
>
>Reviewed-by: Raed Salem
>Signed-off-by: Emeel Hakim
>---
>V1 -> V2: Add common helper to avoid duplicating code
> drivers/net/macsec.c | 114 ++++++++++++++++++++++++++++---------------
> 1 file changed, 74 insertions(+), 40 deletions(-)
>
>diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
>index d73b9d535b7a..afd6ff47ba56 100644
>--- a/drivers/net/macsec.c
>+++ b/drivers/net/macsec.c
>@@ -2583,16 +2583,45 @@ static bool macsec_is_configured(struct macsec_dev *macsec)
> 	return false;
> }
> 
>+static int macsec_update_offload(struct macsec_dev *macsec, enum macsec_offload offload)
>+{
>+	enum macsec_offload prev_offload;
>+	const struct macsec_ops *ops;
>+	struct macsec_context ctx;
>+	int ret = 0;
>+
>+	prev_offload = macsec->offload;
>+
>+	/* Check if the device already has rules configured: we do not support
>+	 * rules migration.
>+	 */
>+	if (macsec_is_configured(macsec))
>+		return -EBUSY;
>+
>+	ops = __macsec_get_ops(offload == MACSEC_OFFLOAD_OFF ? prev_offload : offload,
>+			       macsec, &ctx);
>+	if (!ops)
>+		return -EOPNOTSUPP;
>+
>+	macsec->offload = offload;
>+
>+	ctx.secy = &macsec->secy;
>+	ret = (offload == MACSEC_OFFLOAD_OFF) ? macsec_offload(ops->mdo_del_secy, &ctx) :
>+						macsec_offload(ops->mdo_add_secy, &ctx);
>+
>+	if (ret)
>+		macsec->offload = prev_offload;
>+
>+	return ret;
>+}
>+
> static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
> {
> 	struct nlattr *tb_offload[MACSEC_OFFLOAD_ATTR_MAX + 1];
>-	enum macsec_offload offload, prev_offload;
>-	int (*func)(struct macsec_context *ctx);
> 	struct nlattr **attrs = info->attrs;
>-	struct net_device *dev;
>-	const struct macsec_ops *ops;
>-	struct macsec_context ctx;
>+	enum macsec_offload offload;
> 	struct macsec_dev *macsec;
>+	struct net_device *dev;
> 	int ret;
> 
> 	if (!attrs[MACSEC_ATTR_IFINDEX])
>@@ -2629,39 +2658,7 @@ static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
> 
> 	rtnl_lock();
> 
>-	prev_offload = macsec->offload;
>-	macsec->offload = offload;
>-
>-	/* Check if the device already has rules configured: we do not support
>-	 * rules migration.
>-	 */
>-	if (macsec_is_configured(macsec)) {
>-		ret = -EBUSY;
>-		goto rollback;
>-	}
>-
>-	ops = __macsec_get_ops(offload == MACSEC_OFFLOAD_OFF ? prev_offload : offload,
>-			       macsec, &ctx);
>-	if (!ops) {
>-		ret = -EOPNOTSUPP;
>-		goto rollback;
>-	}
>-
>-	if (prev_offload == MACSEC_OFFLOAD_OFF)
>-		func = ops->mdo_add_secy;
>-	else
>-		func = ops->mdo_del_secy;
>-
>-	ctx.secy = &macsec->secy;
>-	ret = macsec_offload(func, &ctx);
>-	if (ret)
>-		goto rollback;
>-
>-	rtnl_unlock();
>-	return 0;
>-
>-rollback:
>-	macsec->offload = prev_offload;
>+	ret = macsec_update_offload(macsec, offload);
> 
> 	rtnl_unlock();
> 	return ret;
>@@ -3698,6 +3695,7 @@ static const struct nla_policy macsec_rtnl_policy[IFLA_MACSEC_MAX + 1] = {
> 	[IFLA_MACSEC_SCB] = { .type = NLA_U8 },
> 	[IFLA_MACSEC_REPLAY_PROTECT] = { .type = NLA_U8 },
> 	[IFLA_MACSEC_VALIDATION] = { .type = NLA_U8 },
>+	[IFLA_MACSEC_OFFLOAD] = { .type = NLA_U8 },
> };
> 
> static void macsec_free_netdev(struct net_device *dev)
>@@ -3803,6 +3801,29 @@ static int macsec_changelink_common(struct net_device *dev,
> 	return 0;
> }
> 
>+static int macsec_changelink_upd_offload(struct net_device *dev, struct nlattr *data[])
>+{
>+	enum macsec_offload offload;
>+	struct macsec_dev *macsec;
>+
>+	macsec = macsec_priv(dev);
>+	offload = nla_get_u8(data[IFLA_MACSEC_OFFLOAD]);
>+
>+	if (macsec->offload == offload)
>+		return 0;
>+
>+	/* Check if the offloading mode is supported by the underlying layers */
>+	if (offload != MACSEC_OFFLOAD_OFF &&
>+	    !macsec_check_offload(offload, macsec))
>+		return -EOPNOTSUPP;
>+
>+	/* Check if the net device is busy. */
>+	if (netif_running(dev))
>+		return -EBUSY;
>+
>+	return macsec_update_offload(macsec, offload);
>+}
>+
> static int macsec_changelink(struct net_device *dev, struct nlattr *tb[],
> 			     struct nlattr *data[],
> 			     struct netlink_ext_ack *extack)
>@@ -3831,6 +3852,12 @@ static int macsec_changelink(struct net_device *dev, struct nlattr *tb[],
> 	if (ret)
> 		goto cleanup;
> 
>+	if (data[IFLA_MACSEC_OFFLOAD]) {
>+		ret = macsec_changelink_upd_offload(dev, data);
>+		if (ret)
>+			goto cleanup;
>+	}
>+
> 	/* If h/w offloading is available, propagate to the device */
> 	if (macsec_is_offloaded(macsec)) {
> 		const struct macsec_ops *ops;
>@@ -4231,16 +4258,22 @@ static size_t macsec_get_size(const struct net_device *dev)
> 		nla_total_size(1) + /* IFLA_MACSEC_SCB */
> 		nla_total_size(1) + /* IFLA_MACSEC_REPLAY_PROTECT */
> 		nla_total_size(1) + /* IFLA_MACSEC_VALIDATION */
>+		nla_total_size(1) + /* IFLA_MACSEC_OFFLOAD */
> 		0;
> }
> 
> static int macsec_fill_info(struct sk_buff *skb,
> 			    const struct net_device *dev)
> {
>-	struct macsec_secy *secy = &macsec_priv(dev)->secy;
>-	struct macsec_tx_sc *tx_sc = &secy->tx_sc;
>+	struct macsec_tx_sc *tx_sc;
>+	struct macsec_dev *macsec;
>+	struct macsec_secy *secy;
> 	u64 csid;
> 
>+	macsec = macsec_priv(dev);
>+	secy = &macsec->secy;
>+	tx_sc = &secy->tx_sc;
>+
> 	switch (secy->key_len) {
> 	case MACSEC_GCM_AES_128_SAK_LEN:
> 		csid = secy->xpn ? MACSEC_CIPHER_ID_GCM_AES_XPN_128 : MACSEC_DEFAULT_CIPHER_ID;
>@@ -4265,6 +4298,7 @@ static int macsec_fill_info(struct sk_buff *skb,
> 	    nla_put_u8(skb, IFLA_MACSEC_SCB, tx_sc->scb) ||
> 	    nla_put_u8(skb, IFLA_MACSEC_REPLAY_PROTECT, secy->replay_protect) ||
> 	    nla_put_u8(skb, IFLA_MACSEC_VALIDATION, secy->validate_frames) ||
>+	    nla_put_u8(skb, IFLA_MACSEC_OFFLOAD, macsec->offload) ||
> 	    0)
> 		goto nla_put_failure;
> 
>-- 
>2.21.3
>
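
For readers following the thread: the rtnetlink path the quoted patch adds is the
same one that "ip link set macsec0 type macsec offload mac" drives, i.e. an
RTM_NEWLINK request carrying IFLA_MACSEC_OFFLOAD nested inside
IFLA_LINKINFO/IFLA_INFO_DATA. Below is a minimal, untested sketch of such a
request using libmnl. It is not part of the patch; the device name "macsec0" and
the MACSEC_OFFLOAD_MAC mode are only illustrative, and error handling is trimmed.

/* Sketch: set MACsec offload on an existing device over rtnetlink.
 * Assumes libmnl and the uapi headers below; roughly what
 * "ip link set macsec0 type macsec offload mac" sends.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <net/if.h>
#include <sys/socket.h>
#include <libmnl/libmnl.h>
#include <linux/rtnetlink.h>
#include <linux/if_link.h>
#include <linux/if_macsec.h>

int main(void)
{
	char buf[MNL_SOCKET_BUFFER_SIZE];
	struct nlattr *linkinfo, *infodata;
	struct mnl_socket *nl;
	struct nlmsghdr *nlh;
	struct ifinfomsg *ifi;
	unsigned int seq, portid;
	int ret;

	/* Build RTM_NEWLINK for the existing macsec device. */
	nlh = mnl_nlmsg_put_header(buf);
	nlh->nlmsg_type = RTM_NEWLINK;
	nlh->nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
	nlh->nlmsg_seq = seq = time(NULL);

	ifi = mnl_nlmsg_put_extra_header(nlh, sizeof(*ifi));
	ifi->ifi_family = AF_UNSPEC;
	ifi->ifi_index = if_nametoindex("macsec0");	/* illustrative name */
	if (!ifi->ifi_index) {
		perror("if_nametoindex");
		return EXIT_FAILURE;
	}

	/* IFLA_LINKINFO -> IFLA_INFO_KIND ("macsec") + IFLA_INFO_DATA */
	linkinfo = mnl_attr_nest_start(nlh, IFLA_LINKINFO);
	mnl_attr_put_strz(nlh, IFLA_INFO_KIND, "macsec");
	infodata = mnl_attr_nest_start(nlh, IFLA_INFO_DATA);
	mnl_attr_put_u8(nlh, IFLA_MACSEC_OFFLOAD, MACSEC_OFFLOAD_MAC);
	mnl_attr_nest_end(nlh, infodata);
	mnl_attr_nest_end(nlh, linkinfo);

	nl = mnl_socket_open(NETLINK_ROUTE);
	if (!nl || mnl_socket_bind(nl, 0, MNL_SOCKET_AUTOPID) < 0) {
		perror("mnl_socket");
		return EXIT_FAILURE;
	}
	portid = mnl_socket_get_portid(nl);

	if (mnl_socket_sendto(nl, nlh, nlh->nlmsg_len) < 0) {
		perror("mnl_socket_sendto");
		return EXIT_FAILURE;
	}

	/* The kernel answers with an ACK or an error. */
	ret = mnl_socket_recvfrom(nl, buf, sizeof(buf));
	if (ret > 0)
		ret = mnl_cb_run(buf, ret, seq, portid, NULL, NULL);
	if (ret < 0)
		perror("RTM_NEWLINK");

	mnl_socket_close(nl);
	return ret < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}

With the patch applied, such a request lands in macsec_changelink_upd_offload(),
so a device that already has rules configured or is running comes back with
-EBUSY, and an offload mode the underlying layers cannot do with -EOPNOTSUPP,
matching the checks quoted above.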