Subject: Re: [BUG] kernel softlockup due to sidtab_search_context run for long time because of too many sidtab context node
To: Stephen Smalley, yangjihong, paul@paul-moore.com, eparis@parisplace.org, selinux@tycho.nsa.gov, Daniel J Walsh, Lukas Vrabec, Petr Lautrbach
Cc: linux-kernel@vger.kernel.org
From: Casey Schaufler
Message-ID: <23c51943-51a4-4478-760f-375d02caa39b@schaufler-ca.com>
Date: Thu, 14 Dec 2017 08:18:07 -0800
In-Reply-To: <1513178296.19161.8.camel@tycho.nsa.gov>

On 12/13/2017 7:18 AM, Stephen Smalley wrote:
> On Wed, 2017-12-13 at 09:25 +0000, yangjihong wrote:
>> Hello,
>>
>> I am doing stress testing on a 3.10 kernel (CentOS 7.4), constantly
>> starting a number of docker containers with SELinux enabled. After
>> about 2 days, the kernel panics with a softlockup:
>>
>>  [] sched_show_task+0xb8/0x120
>>  [] show_lock_info+0x20f/0x3a0
>>  [] watchdog_timer_fn+0x1da/0x2f0
>>  [] ? watchdog_enable_all_cpus.part.4+0x40/0x40
>>  [] __hrtimer_run_queues+0xd2/0x260
>>  [] hrtimer_interrupt+0xb0/0x1e0
>>  [] local_apic_timer_interrupt+0x37/0x60
>>  [] smp_apic_timer_interrupt+0x50/0x140
>>  [] apic_timer_interrupt+0x6d/0x80
>>  [] ? sidtab_context_to_sid+0xb3/0x480
>>  [] ? sidtab_context_to_sid+0x110/0x480
>>  [] ? mls_setup_user_range+0x145/0x250
>>  [] security_get_user_sids+0x3f7/0x550
>>  [] sel_write_user+0x12b/0x210
>>  [] ? sel_write_member+0x200/0x200
>>  [] selinux_transaction_write+0x48/0x80
>>  [] vfs_write+0xbd/0x1e0
>>  [] SyS_write+0x7f/0xe0
>>  [] system_call_fastpath+0x16/0x1b
>>
>> My opinion:
>> When a docker container starts, it mounts overlay filesystems with
>> different SELinux contexts, with mount points such as:
>>
>> overlay on /var/lib/docker/overlay2/be3ef517730d92fc4530e0e952eae4f6cb0f07b4bc326cb07495ca08fc9ddb66/merged type overlay (rw,relatime,context="system_u:object_r:svirt_sandbox_file_t:s0:c414,c873",lowerdir=/var/lib/docker/overlay2/l/Z4U7WY6ASNV5CFWLADPARHHWY7:/var/lib/docker/overlay2/l/V2S3HOKEFEOQLHBVAL5WLA3YLS:/var/lib/docker/overlay2/l/46YGYO474KLOULZGDSZDW2JPRI,upperdir=/var/lib/docker/overlay2/be3ef517730d92fc4530e0e952eae4f6cb0f07b4bc326cb07495ca08fc9ddb66/diff,workdir=/var/lib/docker/overlay2/be3ef517730d92fc4530e0e952eae4f6cb0f07b4bc326cb07495ca08fc9ddb66/work)
>> shm on /var/lib/docker/containers/9fd65e177d2132011d7b422755793449c91327ca577b8f5d9d6a4adf218d4876/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,context="system_u:object_r:svirt_sandbox_file_t:s0:c414,c873",size=65536k)
>> overlay on /var/lib/docker/overlay2/38d1544d080145c7d76150530d0255991dfb7258cbca14ff6d165b94353eefab/merged type overlay (rw,relatime,context="system_u:object_r:svirt_sandbox_file_t:s0:c431,c651",lowerdir=/var/lib/docker/overlay2/l/3MQQXB4UCLFB7ANVRHPAVRCRSS:/var/lib/docker/overlay2/l/46YGYO474KLOULZGDSZDW2JPRI,upperdir=/var/lib/docker/overlay2/38d1544d080145c7d76150530d0255991dfb7258cbca14ff6d165b94353eefab/diff,workdir=/var/lib/docker/overlay2/38d1544d080145c7d76150530d0255991dfb7258cbca14ff6d165b94353eefab/work)
>> shm on /var/lib/docker/containers/662e7f798fc08b09eae0f0f944537a4bcedc1dcf05a65866458523ffd4a71614/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,context="system_u:object_r:svirt_sandbox_file_t:s0:c431,c651",size=65536k)
>>
>> sidtab_search_context() checks whether the context is already in the
>> sidtab list; if it is not found, a new node is generated and inserted
>> into the list. As the number of containers grows, so does the number
>> of context nodes. In our testing the node count eventually reached
>> 300,000+, at which point a single sidtab_context_to_sid() call takes
>> 100-200ms, which leads to the system softlockup.
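
For illustration, the pattern described above amounts to a linear
reverse lookup whose miss path scans every existing node. The
following is a minimal userspace sketch, not the actual kernel code;
the structure layout and helpers are simplified stand-ins:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct sidtab_node {
	unsigned int sid;
	char *context;              /* stand-in for struct context */
	struct sidtab_node *next;
};

static struct sidtab_node *sidtab_head;
static unsigned int next_sid = 1;

/* The O(n) walk: every not-yet-seen context pays for a full scan. */
static struct sidtab_node *sidtab_search_context(const char *ctx)
{
	struct sidtab_node *n;

	for (n = sidtab_head; n; n = n->next)
		if (strcmp(n->context, ctx) == 0)
			return n;
	return NULL;
}

static unsigned int sidtab_context_to_sid(const char *ctx)
{
	struct sidtab_node *n = sidtab_search_context(ctx);

	if (!n) {                   /* miss: allocate a node, never freed */
		n = malloc(sizeof(*n));
		n->sid = next_sid++;
		n->context = strdup(ctx);
		n->next = sidtab_head;
		sidtab_head = n;
	}
	return n->sid;
}

int main(void)
{
	char ctx[64];

	/*
	 * Each container mount contributes a fresh context, so total
	 * work grows quadratically.  20,000 already takes noticeable
	 * time; at the reported 300,000+ nodes a single miss costs a
	 * full scan of every node.
	 */
	for (int i = 0; i < 20000; i++) {
		snprintf(ctx, sizeof(ctx),
			 "system_u:object_r:svirt_sandbox_file_t:s0:c%d,c%d",
			 i % 1024, i / 1024);
		sidtab_context_to_sid(ctx);
	}
	printf("nodes: %u\n", next_sid - 1);
	return 0;
}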
>> Is this an SELinux bug? Why is the context node not deleted when the
>> filesystem is unmounted? I cannot find a function in sidtab.c that
>> deletes nodes.
>>
>> Thanks for reading, and looking forward to your reply.
> So, does docker just keep allocating a unique category set for every
> new container, never reusing them even if the container is destroyed?
> That would be a bug in docker IMHO. Or are you creating an unbounded
> number of containers and never destroying the older ones?

You can't reuse the security context. A process in ContainerA sends a
labeled packet to MachineB. ContainerA goes away and its context is
recycled in ContainerC. MachineB responds some time later, again with
a labeled packet. ContainerC gets information intended for ContainerA,
and uses the information to take over the Elbonian government.

> On the selinux userspace side, we'd also like to eliminate the use of
> /sys/fs/selinux/user (sel_write_user -> security_get_user_sids)
> entirely, which is what triggered this for you.
>
> We cannot currently delete a sidtab node because we have no way of
> knowing if there are any lingering references to the SID. Fixing that
> would require reference-counted SIDs, which goes beyond just SELinux
> since SIDs/secids are returned by LSM hooks and cached in other kernel
> data structures.
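
To make the scope of that concrete: reference counting would mean
pairing every secid handed out through an LSM hook with get/put calls.
This is an illustrative sketch only; nothing like it exists in the
tree today, and the names and layout are invented:

#include <assert.h>

struct sidtab_node {
	unsigned int sid;
	unsigned int refcount;      /* would be refcount_t in-kernel */
	struct sidtab_node *next;
};

/* Anything caching the secid would have to take a reference... */
static void sid_get(struct sidtab_node *n)
{
	n->refcount++;              /* would need atomicity in-kernel */
}

/*
 * ...and drop it when the cached secid is discarded.  Only at zero
 * could the node safely be unlinked from the sidtab and freed.
 */
static int sid_put(struct sidtab_node *n)
{
	return --n->refcount == 0;
}

int main(void)
{
	struct sidtab_node n = { .sid = 42, .refcount = 1, .next = 0 };

	sid_get(&n);                /* e.g. an inode caching the secid */
	assert(!sid_put(&n));       /* still referenced elsewhere */
	assert(sid_put(&n));        /* last reference gone: deletable */
	return 0;
}

The hard part is not the counting itself but auditing every place a
secid is stashed outside the sidtab, which is why Stephen notes it
goes beyond SELinux.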
You could delete a sidtab node. The code already deals with unfindable
SIDs. The issue is that eventually you run out of SIDs. Then you are
forced to recycle SIDs, which leads to the overthrow of the Elbonian
government.

> sidtab_search_context() could no doubt be optimized for the negative
> case; there was an earlier optimization for the positive case by
> adding a cache to sidtab_context_to_sid() prior to calling it.

It's a reverse lookup in the sidtab. This seems like a bad idea.
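
For concreteness, one shape the negative-case optimization could take
is bucketing nodes by a hash of the context, so a context not yet in
the sidtab is rejected after scanning one short chain instead of every
node. A userspace sketch only; the bucket count, hash function, and
names are all invented here, not something proposed in this thread:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CTXTAB_BUCKETS 4096         /* assumption: sized for ~300k nodes */

struct ctx_node {
	unsigned int sid;
	char *context;
	struct ctx_node *hnext;     /* chain within one bucket */
};

static struct ctx_node *ctxtab[CTXTAB_BUCKETS];

/* FNV-1a, standing in for whatever hash a real fix would pick. */
static unsigned int ctx_hash(const char *s)
{
	unsigned int h = 2166136261u;

	while (*s)
		h = (h ^ (unsigned char)*s++) * 16777619u;
	return h & (CTXTAB_BUCKETS - 1);
}

/* Reverse lookup now costs one chain walk, not a scan of every node. */
static struct ctx_node *ctxtab_search(const char *ctx)
{
	struct ctx_node *n;

	for (n = ctxtab[ctx_hash(ctx)]; n; n = n->hnext)
		if (strcmp(n->context, ctx) == 0)
			return n;
	return NULL;                /* cheap negative case */
}

static void ctxtab_insert(const char *ctx, unsigned int sid)
{
	struct ctx_node *n = malloc(sizeof(*n));
	unsigned int b = ctx_hash(ctx);

	n->sid = sid;
	n->context = strdup(ctx);
	n->hnext = ctxtab[b];
	ctxtab[b] = n;
}

int main(void)
{
	ctxtab_insert("system_u:object_r:svirt_sandbox_file_t:s0:c414,c873", 1);
	printf("hit: %d\n",
	       ctxtab_search("system_u:object_r:svirt_sandbox_file_t:s0:c414,c873") != NULL);
	printf("miss: %d\n",
	       ctxtab_search("system_u:object_r:svirt_sandbox_file_t:s0:c431,c651") == NULL);
	return 0;
}

This changes only the cost of the lookup; it does nothing about the
unbounded growth of the table itself, which is the reuse/deletion
question discussed above.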