From: Vitaly Kuznetsov
To: Michael Kelley
Cc: mikelley@microsoft.com, kys@microsoft.com, haiyangz@microsoft.com,
    sthemmin@microsoft.com, wei.liu@kernel.org, linux-kernel@vger.kernel.org,
    linux-hyperv@vger.kernel.org, decui@microsoft.com
Subject: Re: [PATCH v2 1/1] Drivers: hv: vmbus: Increase wait time for VMbus unload
In-Reply-To: <1618894089-126662-1-git-send-email-mikelley@microsoft.com>
References: <1618894089-126662-1-git-send-email-mikelley@microsoft.com>
Date: Tue, 20 Apr 2021 11:31:54 +0200
Message-ID: <87tuo1i9o5.fsf@vitty.brq.redhat.com>

Michael Kelley writes:

> When running in Azure, disks may be connected to a Linux VM with
> read/write caching enabled. If a VM panics and issues a VMbus
> UNLOAD request to Hyper-V, the response is delayed until all dirty
> data in the disk cache is flushed. In extreme cases, this flushing
> can take 10's of seconds, depending on the disk speed and the amount
> of dirty data. If kdump is configured for the VM, the current 10 second
> timeout in vmbus_wait_for_unload() may be exceeded, and the UNLOAD
> complete message may arrive well after the kdump kernel is already
> running, causing problems. Note that no problem occurs if kdump is
> not enabled because Hyper-V waits for the cache flush before doing
> a reboot through the BIOS/UEFI code.
>
> Fix this problem by increasing the timeout in vmbus_wait_for_unload()
> to 100 seconds. Also output periodic messages so that if anyone is
> watching the serial console, they won't think the VM is completely
> hung.
>
> Fixes: 911e1987efc8 ("Drivers: hv: vmbus: Add timeout to vmbus_wait_for_unload")
> Signed-off-by: Michael Kelley
> ---
>
> Changed in v2: Fixed silly error in the argument to mdelay()
>
> ---
>  drivers/hv/channel_mgmt.c | 30 +++++++++++++++++++++++++-----
>  1 file changed, 25 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
> index f3cf4af..ef4685c 100644
> --- a/drivers/hv/channel_mgmt.c
> +++ b/drivers/hv/channel_mgmt.c
> @@ -755,6 +755,12 @@ static void init_vp_index(struct vmbus_channel *channel)
>  	free_cpumask_var(available_mask);
>  }
>
> +#define UNLOAD_DELAY_UNIT_MS	10		/* 10 milliseconds */
> +#define UNLOAD_WAIT_MS		(100*1000)	/* 100 seconds */
> +#define UNLOAD_WAIT_LOOPS	(UNLOAD_WAIT_MS/UNLOAD_DELAY_UNIT_MS)
> +#define UNLOAD_MSG_MS		(5*1000)	/* Every 5 seconds */
> +#define UNLOAD_MSG_LOOPS	(UNLOAD_MSG_MS/UNLOAD_DELAY_UNIT_MS)
> +
>  static void vmbus_wait_for_unload(void)
>  {
>  	int cpu;
> @@ -772,12 +778,17 @@ static void vmbus_wait_for_unload(void)
>  	 * vmbus_connection.unload_event. If not, the last thing we can do is
>  	 * read message pages for all CPUs directly.
>  	 *
> -	 * Wait no more than 10 seconds so that the panic path can't get
> -	 * hung forever in case the response message isn't seen.
> +	 * Wait up to 100 seconds since an Azure host must writeback any dirty
> +	 * data in its disk cache before the VMbus UNLOAD request will
> +	 * complete. This flushing has been empirically observed to take up
> +	 * to 50 seconds in cases with a lot of dirty data, so allow additional
> +	 * leeway and for inaccuracies in mdelay(). But eventually time out so
> +	 * that the panic path can't get hung forever in case the response
> +	 * message isn't seen.

I vaguely remember debugging cases where CHANNELMSG_UNLOAD_RESPONSE
never arrived; it was kind of pointless to proceed to kexec, as attempts
to reconnect VMbus devices were failing (no devices were offered after
CHANNELMSG_REQUESTOFFERS, AFAIR). Would it maybe make sense to just do
an emergency reboot instead of proceeding to kexec when this happens?
Just wondering.

>  	 */
> -	for (i = 0; i < 1000; i++) {
> +	for (i = 1; i <= UNLOAD_WAIT_LOOPS; i++) {
>  		if (completion_done(&vmbus_connection.unload_event))
> -			break;
> +			goto completed;
>
>  		for_each_online_cpu(cpu) {
>  			struct hv_per_cpu_context *hv_cpu
> @@ -800,9 +811,18 @@ static void vmbus_wait_for_unload(void)
>  			vmbus_signal_eom(msg, message_type);
>  		}
>
> -		mdelay(10);
> +		/*
> +		 * Give a notice periodically so someone watching the
> +		 * serial output won't think it is completely hung.
> +		 */
> +		if (!(i % UNLOAD_MSG_LOOPS))
> +			pr_notice("Waiting for VMBus UNLOAD to complete\n");
> +
> +		mdelay(UNLOAD_DELAY_UNIT_MS);
>  	}
> +	pr_err("Continuing even though VMBus UNLOAD did not complete\n");
>
> +completed:
>  	/*
>  	 * We're crashing and already got the UNLOAD_RESPONSE, cleanup all
>  	 * maybe-pending messages on all CPUs to be able to receive new

This is definitely an improvement,

Reviewed-by: Vitaly Kuznetsov

-- 
Vitaly
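
[Editor's illustration, not part of the posted patch or the thread's
code: a minimal sketch of the emergency-reboot fallback Vitaly raises
above, placed where the patch currently emits its final pr_err(). It
reuses completion_done() and vmbus_connection.unload_event from the
patch context; emergency_restart() and pr_emerg() are existing kernel
helpers from <linux/reboot.h> and <linux/printk.h>. Whether skipping
kexec this way is an acceptable trade-off is exactly the open question.]

	/*
	 * Hypothetical alternative to falling through to kexec: if the
	 * UNLOAD response never arrived within the full wait, request an
	 * immediate reboot instead of booting the kdump kernel with a
	 * half-torn-down VMbus connection.
	 */
	if (!completion_done(&vmbus_connection.unload_event)) {
		pr_emerg("VMBus UNLOAD did not complete; emergency restart instead of kexec\n");
		emergency_restart();	/* should not return */
	}

[Compared with the pr_err() fall-through in the posted patch, this gives
up the chance of collecting a kdump in exchange for a predictable
reboot; the patch as posted keeps the fall-through behaviour.]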