From: Jesse Gross
Date: Fri, 18 Mar 2016 15:22:50 -0700
Subject: Re: [ovs-dev] [PATCH] openvswitch: reduce padding in struct sw_flow_key
To: Arnd Bergmann
Cc: Pravin Shelar, "David S. Miller", ovs dev, Linux Kernel Network Developers, Jiri Benc, linux-kernel@vger.kernel.org, Joe Stringer, Justin Pettit

On Fri, Mar 18, 2016 at 6:34 AM, Arnd Bergmann wrote:
> This means it's still too large really, we just don't warn about it any
> more, and will get the warning again once another member is added. My
> patch is a band-aid at best, but more work is needed here. One problem
> is that ovs_flow_cmd_new and ovs_flow_cmd_set have two copies of struct
> sw_flow_key on the stack, one of them nested within sw_flow_mask. If we
> could reduce that to a single instance, the stack usage would be cut in
> half here.

I think it should be possible to reduce those two functions to only use a single instance of the structure. For new flows, we're already allocating the key structure on the heap, so it seems like we could translate the key directly into that rather than into an intermediate location on the stack.
And when setting flows, I'm not sure that we really need the mask at all. We don't do anything with it other than check it against the actions, and we really should be using the original mask for that rather than the new one.

It seems like it would be preferable to go down that route rather than taking this patch, since, as you say, this patch only suppresses the warning and doesn't significantly reduce the actual stack consumption. In addition, the ordering of the flow layout affects how much data needs to be compared during packet lookup, so this change will likely hurt forwarding performance.