Date: Mon, 16 Apr 2018 16:40:41 +0200
From: Jan Kara
To: Guillaume Morin
Cc: Pavlos Parissis, stable@vger.kernel.org, decui@microsoft.com, jack@suse.com,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, mszeredi@redhat.com
Subject: Re: kernel panics with 4.14.X versions
Message-ID: <20180416144041.t2mt7ugzwqr56ka3@quack2.suse.cz>
In-Reply-To: <20180416132550.d25jtdntdvpy55l3@bender.morinfr.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon 16-04-18 15:25:50, Guillaume Morin wrote:
> Fwiw, there have already been reports of similar
> soft lockups in fsnotify() on 4.14: https://lkml.org/lkml/2018/3/2/1038
>
> We have also noticed similar softlockups with 4.14.22 here.

Yeah.

> On 16 Apr 13:54, Pavlos Parissis wrote:
> > Hi all,
> >
> > We have observed kernel panics on several master kubernetes clusters, where we run
> > kubernetes API services and not application workloads.
> >
> > Those clusters use kernel versions 4.14.14 and 4.14.32, but we switched everything
> > to kernel version 4.14.32 as a way to address the issue.
> >
> > We have HP and Dell hardware in those clusters, and the network cards also differ:
> > we have bnx2x and mlx5_core in use.
> >
> > We also run kernel version 4.14.32 on different types of workloads, software load
> > balancing using HAProxy, and we don't have any crashes there.
> >
> > Since the crash happens on different hardware, we think it could be a kernel issue,
> > but we aren't sure about it. Thus, I am contacting kernel people in order to get some
> > hint which can help us figure out what causes this.
> >
> > In our kubernetes clusters, we have instructed the kernel to panic upon soft lockup:
> > we use 'kernel.softlockup_panic=1', 'kernel.hung_task_panic=1' and 'kernel.watchdog_thresh=10'.
> > Thus, we see the stack traces. Today we have disabled this; later I will explain why.
> >
> > I believe we have two distinct types of panics: one is triggered upon soft lockup, and another
> > where the call trace is about the scheduler ("sched: Unexpected reschedule of offline CPU#8!").
> >
> > Let me walk you through the kernel panics and some observations.
> >
> > The following series of stack traces happens when one CPU (CPU 24) is stuck for ~22 seconds.
> > watchdog_thresh is set to 10 and, as far as I remember, the softlockup threshold is
> > (2 * watchdog_thresh), so it makes sense to see the kernel crash after ~20 seconds.
> >
> > After the stack trace, we have the output of sar for CPU#24, and we see that just before the
> > crash, CPU utilization at system level went to 100%. Now let's move to another panic.
> >
> > [373782.361064] watchdog: BUG: soft lockup - CPU#24 stuck for 22s! [kube-apiserver:24261]
> > [373782.378225] Modules linked in: binfmt_misc sctp_diag sctp dccp_diag dccp tcp_diag udp_diag
> >  inet_diag unix_diag cfg80211 rfkill dell_rbu 8021q garp mrp xfs libcrc32c loop x86_pkg_temp_thermal
> >  intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel
> >  pcbc aesni_intel vfat fat crypto_simd glue_helper cryptd intel_cstate intel_rapl_perf iTCO_wdt ses
> >  iTCO_vendor_support mxm_wmi ipmi_si dcdbas enclosure mei_me pcspkr ipmi_devintf lpc_ich sg mei
> >  ipmi_msghandler mfd_core shpchp wmi acpi_power_meter netconsole nfsd auth_rpcgss nfs_acl lockd grace
> >  sunrpc ip_tables ext4 mbcache jbd2 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt
> >  fb_sys_fops sd_mod ttm crc32c_intel ahci libahci mlx5_core drm mlxfw mpt3sas ptp libata raid_class
> >  pps_core scsi_transport_sas
> > [373782.516807]  dm_mirror dm_region_hash dm_log dm_mod dax
> > [373782.531739] CPU: 24 PID: 24261 Comm: kube-apiserver Not tainted 4.14.32-1.el7.x86_64 #1
> > [373782.549848] Hardware name: Dell Inc. PowerEdge R630/02C2CP, BIOS 2.4.3 01/17/2017
> > [373782.567486] task: ffff882f66d28000 task.stack: ffffc9002120c000
> > [373782.583441] RIP: 0010:fsnotify+0x197/0x510
> > [373782.597319] RSP: 0018:ffffc9002120fdb8 EFLAGS: 00000286 ORIG_RAX: ffffffffffffff10
> > [373782.615308] RAX: 0000000000000000 RBX: ffff882f9ec65c20 RCX: 0000000000000002
> > [373782.632950] RDX: 0000000000028700 RSI: 0000000000000002 RDI: ffffffff8269a4e0
> > [373782.650616] RBP: ffffc9002120fe98 R08: 0000000000000000 R09: 0000000000000000
> > [373782.668287] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
> > [373782.685918] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
> > [373782.703302] FS:  000000c42009f090(0000) GS:ffff882fbf900000(0000) knlGS:0000000000000000
> > [373782.721887] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [373782.737741] CR2: 00007f82b6539244 CR3: 0000002f3de2a005 CR4: 00000000003606e0
> > [373782.755247] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > [373782.772722] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> > [373782.790043] Call Trace:
> > [373782.802041]  vfs_write+0x151/0x1b0
> > [373782.815081]  ? syscall_trace_enter+0x1cd/0x2b0
> > [373782.829175]  SyS_write+0x55/0xc0
> > [373782.841870]  do_syscall_64+0x79/0x1b0
> > [373782.855073]  entry_SYSCALL_64_after_hwframe+0x3d/0xa2

Can you please run the RIP through ./scripts/faddr2line to see where exactly
we are looping? I expect it is the loop iterating over marks to notify, but
better to be sure.

How easily can you hit this? Are you able to run debug kernels / inspect
crash dumps when the issue occurs? Also, testing with the latest mainline
kernel (4.16) would be welcome, to check whether this isn't just an issue
with the backport of fsnotify fixes from Miklos.

								Honza
-- 
Jan Kara
SUSE Labs, CR
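[Editorial sketch of the faddr2line step suggested above. The RIP line is
copied from the report; the vmlinux path and build-tree location are
assumptions about the reader's environment, not part of the thread.]

```shell
# Resolve the soft-lockup RIP to a source file and line.
# The RIP line is taken verbatim from the soft-lockup report above.
rip_line="RIP: 0010:fsnotify+0x197/0x510"

# Strip the "RIP: 0010:" prefix, leaving the func+offset/size form
# that scripts/faddr2line accepts.
sym="${rip_line##*:}"
echo "$sym"

# Then, in the source tree the crashing kernel was built from (vmlinux
# with debug info present -- path is an assumption about your setup):
# ./scripts/faddr2line vmlinux "$sym"
```

If faddr2line points into the fsnotify mark-list walk, that would support
the suspicion above that the CPU is spinning while iterating over marks.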