From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Sebastian Andrzej Siewior, Herbert Xu, Sasha Levin
Subject: [PATCH 5.4 186/411] crypto: cryptd - Protect per-CPU resource by disabling BH.
Date: Mon, 13 Jun 2022 12:07:39 +0200
Message-Id: <20220613094934.237155067@linuxfoundation.org>
In-Reply-To: <20220613094928.482772422@linuxfoundation.org>
References: <20220613094928.482772422@linuxfoundation.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Sebastian Andrzej Siewior

[ Upstream commit 91e8bcd7b4da182e09ea19a2c73167345fe14c98 ]

The access to cryptd_queue::cpu_queue is synchronized by disabling
preemption in cryptd_enqueue_request() and disabling BH in
cryptd_queue_worker(). This implies that access is allowed from BH.

If cryptd_enqueue_request() is invoked from preemptible context _and_
soft interrupt, then this can lead to list corruption, since
cryptd_enqueue_request() is not protected against access from soft
interrupt.

Replace get_cpu() in cryptd_enqueue_request() with local_bh_disable()
to ensure BH is always disabled.
Remove preempt_disable() from cryptd_queue_worker() since it is not
needed because local_bh_disable() ensures synchronisation.
Fixes: 254eff771441 ("crypto: cryptd - Per-CPU thread implementation...")
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Herbert Xu
Signed-off-by: Sasha Levin
---
 crypto/cryptd.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index 927760b316a4..43a1a855886b 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -39,6 +39,10 @@ struct cryptd_cpu_queue {
 };
 
 struct cryptd_queue {
+	/*
+	 * Protected by disabling BH to allow enqueueing from softinterrupt and
+	 * dequeuing from kworker (cryptd_queue_worker()).
+	 */
 	struct cryptd_cpu_queue __percpu *cpu_queue;
 };
 
@@ -125,28 +129,28 @@ static void cryptd_fini_queue(struct cryptd_queue *queue)
 static int cryptd_enqueue_request(struct cryptd_queue *queue,
				  struct crypto_async_request *request)
 {
-	int cpu, err;
+	int err;
 	struct cryptd_cpu_queue *cpu_queue;
 	refcount_t *refcnt;
 
-	cpu = get_cpu();
+	local_bh_disable();
 	cpu_queue = this_cpu_ptr(queue->cpu_queue);
 	err = crypto_enqueue_request(&cpu_queue->queue, request);
 
 	refcnt = crypto_tfm_ctx(request->tfm);
 
 	if (err == -ENOSPC)
-		goto out_put_cpu;
+		goto out;
 
-	queue_work_on(cpu, cryptd_wq, &cpu_queue->work);
+	queue_work_on(smp_processor_id(), cryptd_wq, &cpu_queue->work);
 
 	if (!refcount_read(refcnt))
-		goto out_put_cpu;
+		goto out;
 
 	refcount_inc(refcnt);
 
-out_put_cpu:
-	put_cpu();
+out:
+	local_bh_enable();
 	return err;
 }
 
@@ -162,15 +166,10 @@ static void cryptd_queue_worker(struct work_struct *work)
 	cpu_queue = container_of(work, struct cryptd_cpu_queue, work);
 	/*
	 * Only handle one request at a time to avoid hogging crypto workqueue.
-	 * preempt_disable/enable is used to prevent being preempted by
-	 * cryptd_enqueue_request(). local_bh_disable/enable is used to prevent
-	 * cryptd_enqueue_request() being accessed from software interrupts.
	 */
 	local_bh_disable();
-	preempt_disable();
 	backlog = crypto_get_backlog(&cpu_queue->queue);
 	req = crypto_dequeue_request(&cpu_queue->queue);
-	preempt_enable();
 	local_bh_enable();
 
 	if (!req)
-- 
2.35.1