Subject: Re: [PATCH] io_thread/x86: don't reset 'cs', 'ss', 'ds' and 'es' registers for io_threads
From: Olivier Langlois <olivier@trillion01.com>
To: Jens Axboe, Linus Torvalds
Cc: Stefan Metzmacher, Thomas Gleixner, Andy Lutomirski, Linux Kernel Mailing List, io-uring, the arch/x86 maintainers
Date: Thu, 20 May 2021 00:13:19 -0400
In-Reply-To: <3df541c3-728c-c63d-eaeb-a4c382e01f0b@kernel.dk>
References: <8735v3ex3h.ffs@nanos.tec.linutronix.de> <3C41339D-29A2-4AB1-958F-19DB0A92D8D7@amacapital.net> <8735v3jujv.ffs@nanos.tec.linutronix.de> <12710fda-1732-ee55-9ac1-0df9882aa71b@samba.org> <59ea3b5a-d7b3-b62e-cc83-1f32a83c4ac2@kernel.dk> <17471c9fec18765449ef3a5a4cddc23561b97f52.camel@trillion01.com> <3df541c3-728c-c63d-eaeb-a4c382e01f0b@kernel.dk>
Organization: Trillion01 Inc
Hi Jens,

On Wed, 2021-05-12 at 14:55 -0600, Jens Axboe wrote:
> > Jens, have you played with core-dumping when there are active
> > io_uring threads? There's a test-program in that github issue
> > report..
>
> Yes, I also did that again after the report, and did so again right
> now just to verify. I'm not seeing any issues with coredumps being
> generated if the app crashes, or if I send it SIGILL, for example...
> I also just now tried Olivier's test case, and it seems to dump just
> fine for me.
>
> I then tried backing out the patch from Stefan, and it works fine
> with that reverted too. So a bit puzzled as to what is going on
> here...
>
> Anyway, I'll check in on that github thread and see if we can narrow
> this down.

I know that my test case isn't conclusive. It is a failed attempt to
capture what my program is doing.

The priority of investigating my core dump issue dropped substantially
last week because I solved my primary issue (a leak in the buffers
provided to io_uring, hit during disconnection). My program then ran
for days, but this morning it crashed again without producing any core
dump. It is a very frustrating situation: the bug would probably be
trivial to diagnose and fix, but without the core the logs are opaque
and give no clue about why the program crashed.

A key characteristic of my program is that it creates at least one
io-worker thread per io_uring instance. Oddly enough, I am having a
hard time recreating a test case that spawns io-worker threads. My
first attempt was the test case from the github issue. I have kept
tweaking it, and I know I will find the right sequence to get
io-worker threads spawned. I suspect that once that condition is met,
it might be sufficient to trigger the core dump generation problem.

I have also tried to benchmark io_uring with
https://github.com/frevib/io_uring-echo-server/blob/io-uring-feat-fast-poll/benchmarks/benchmarks.md
(if you give it a try, make sure you erase its private, out-of-date
liburing copy before compiling it...). This didn't generate any
io-worker threads either.

In a nutshell, here is what my program does for most of its 85-86
sockets:

1. Create a TCP socket.
2. Set O_NONBLOCK on it.
3. Call connect().
4. Use IORING_OP_POLL_ADD with POLLOUT to be notified when the
   connection completes.
5. Once the connection is completed, clear the socket's O_NONBLOCK
   flag and use IORING_OP_WRITE to send a request.
6. Submit an IORING_OP_READ with IOSQE_BUFFER_SELECT to read the
   server reply asynchronously.

Here are 2 more notes about the sequence (a small liburing sketch of
it follows after them):

a) If you wonder about the flip-flop between blocking and
non-blocking, it is because I have adapted existing code to use
io_uring. To minimize the required code changes, I left the
non-blocking connection code untouched.

b) If I add IOSQE_ASYNC to the IORING_OP_READ, io_uring will generate
a lot of io-worker threads. I mean a lot... You can see here:
https://github.com/axboe/liburing/issues/349
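To make the sequence concrete, here is a minimal liburing sketch of
it. It is not my actual code: the server address, request payload,
buffer sizes and buffer-group id are placeholders, and error handling
is omitted.

/*
 * Minimal sketch of steps 1-6 above, using liburing.
 * Placeholder values throughout; error handling omitted.
 */
#include <liburing.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#define BUF_GROUP 1          /* arbitrary buffer group id */
#define BUF_SIZE  4096
#define NR_BUFS   8

/* Submit anything pending and wait for one completion. */
static struct io_uring_cqe *submit_wait(struct io_uring *ring)
{
	struct io_uring_cqe *cqe;

	io_uring_submit(ring);
	io_uring_wait_cqe(ring, &cqe);
	return cqe;
}

int main(void)
{
	static char bufs[NR_BUFS][BUF_SIZE];
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port   = htons(8080),	/* placeholder server */
	};
	const char req[] = "PING\n";		/* placeholder request */

	inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
	io_uring_queue_init(8, &ring, 0);

	/* Hand a group of buffers to the kernel for IOSQE_BUFFER_SELECT. */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_provide_buffers(sqe, bufs, BUF_SIZE, NR_BUFS, BUF_GROUP, 0);
	cqe = submit_wait(&ring);
	io_uring_cqe_seen(&ring, cqe);

	/* Steps 1-3: non-blocking TCP socket, connect() in flight. */
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
	connect(fd, (struct sockaddr *)&addr, sizeof(addr)); /* -> EINPROGRESS */

	/* Step 4: POLL_ADD with POLLOUT fires when the connect completes.
	 * A real program would check SO_ERROR here. */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_poll_add(sqe, fd, POLLOUT);
	cqe = submit_wait(&ring);
	io_uring_cqe_seen(&ring, cqe);

	/* Step 5: back to blocking mode, send the request. */
	fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) & ~O_NONBLOCK);
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write(sqe, fd, req, sizeof(req) - 1, 0);
	cqe = submit_wait(&ring);
	io_uring_cqe_seen(&ring, cqe);

	/* Step 6: async read with a kernel-selected buffer. Per note b),
	 * OR-ing IOSQE_ASYNC into sqe->flags here is what makes io_uring
	 * spawn lots of io-worker threads. */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, fd, NULL, BUF_SIZE, 0);
	sqe->flags |= IOSQE_BUFFER_SELECT;	/* | IOSQE_ASYNC */
	sqe->buf_group = BUF_GROUP;
	cqe = submit_wait(&ring);
	if (cqe->res >= 0)
		printf("read %d bytes into buffer %u\n", cqe->res,
		       cqe->flags >> IORING_CQE_BUFFER_SHIFT);
	io_uring_cqe_seen(&ring, cqe);

	io_uring_queue_exit(&ring);
	close(fd);
	return 0;
}

If note b) holds, enabling IOSQE_ASYNC on that last read sqe should be
enough to watch the io-workers multiply.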
So what I am currently doing is tweaking my test case to emulate the
described sequence as closely as possible, in order to get some
io-worker threads spawned and then force a core dump, to validate that
it is the presence of io-worker threads that is causing the core dump
generation issue (or not!).

Quick question to the devs: is there any example program bundled with
liburing that reliably creates io-worker threads?

Greetings,
Olivier