Date: Wed, 16 Jul 2014 17:17:15 +0100
From: Will Deacon
To: Zi Shen Lim
Cc: Catalin Marinas, Jiang Liu, AKASHI Takahiro, "David S. Miller",
    Daniel Borkmann, Alexei Starovoitov, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, netdev@vger.kernel.org
Subject: Re: [PATCH RFCv3 08/14] arm64: introduce aarch64_insn_gen_movewide()
Message-ID: <20140716161715.GU29414@arm.com>
References: <1405405512-4423-1-git-send-email-zlim.lnx@gmail.com>
 <1405405512-4423-9-git-send-email-zlim.lnx@gmail.com>
In-Reply-To: <1405405512-4423-9-git-send-email-zlim.lnx@gmail.com>

On Tue, Jul 15, 2014 at 07:25:06AM +0100, Zi Shen Lim wrote:
> Introduce function to generate move wide (immediate) instructions.

[...]

> +u32 aarch64_insn_gen_movewide(enum aarch64_insn_register dst,
> +                              int imm, int shift,
> +                              enum aarch64_insn_variant variant,
> +                              enum aarch64_insn_movewide_type type)
> +{
> +        u32 insn;
> +
> +        switch (type) {
> +        case AARCH64_INSN_MOVEWIDE_ZERO:
> +                insn = aarch64_insn_get_movz_value();
> +                break;
> +        case AARCH64_INSN_MOVEWIDE_KEEP:
> +                insn = aarch64_insn_get_movk_value();
> +                break;
> +        case AARCH64_INSN_MOVEWIDE_INVERSE:
> +                insn = aarch64_insn_get_movn_value();
> +                break;
> +        default:
> +                BUG_ON(1);
> +        }
> +
> +        BUG_ON(imm < 0 || imm > 65535);

Do this check with masking instead?

> +
> +        switch (variant) {
> +        case AARCH64_INSN_VARIANT_32BIT:
> +                BUG_ON(shift != 0 && shift != 16);
> +                break;
> +        case AARCH64_INSN_VARIANT_64BIT:
> +                insn |= BIT(31);
> +                BUG_ON(shift != 0 && shift != 16 && shift != 32 &&
> +                       shift != 48);

Would be neater as a nested switch, perhaps? If you reorder the outer
switch, you could probably fall through too and combine the shift checks.

Will
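
One possible reading of the masking suggestion above, as a minimal sketch
only (it assumes the immediate merely has to fit the 16-bit field encoded by
MOVZ/MOVK/MOVN, matching the 0..65535 range already checked in the patch,
and is not necessarily the form that was eventually applied):

        /*
         * Sketch: reject anything outside bits [15:0]. A negative int has
         * its high bits set, so it trips the mask as well, which makes the
         * explicit "imm < 0" test unnecessary.
         */
        BUG_ON(imm & ~0xffff);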
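
And a sketch of how the reordered outer switch with a fall-through might
combine the shift checks. The shift values and BIT(31) are taken from the
checks already in the quoted patch; this is only one way the suggestion
could be shaped, not the definitive rework:

        switch (variant) {
        case AARCH64_INSN_VARIANT_64BIT:
                insn |= BIT(31);
                /* shifts of 32 and 48 are only legal on the 64-bit variant */
                if (shift == 32 || shift == 48)
                        break;
                /* otherwise fall through to the checks shared with 32-bit */
        case AARCH64_INSN_VARIANT_32BIT:
                BUG_ON(shift != 0 && shift != 16);
                break;
        default:
                BUG_ON(1);
        }

Whether this or a nested switch on shift reads better is a style call; either
way the 32/48 cases stay confined to the 64-bit variant while the 0/16 check
is written once.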