Subject: Re: [BUG]kernel softlockup due to sidtab_search_context run for long time because of too many sidtab context node
To: Stephen Smalley, yangjihong, paul@paul-moore.com, eparis@parisplace.org, selinux@tycho.nsa.gov, Daniel J Walsh, Lukas Vrabec, Petr Lautrbach
Cc: linux-kernel@vger.kernel.org
From: Casey Schaufler
Message-ID: <79e41bd9-2570-7386-d462-d242a18fb786@schaufler-ca.com>
Date: Thu, 14 Dec 2017 09:00:46 -0800
In-Reply-To: <1513269771.18008.6.camel@tycho.nsa.gov>
References: <1BC3DBD98AD61A4A9B2569BC1C0B4437D5D1F3@DGGEMM506-MBS.china.huawei.com> <1513178296.19161.8.camel@tycho.nsa.gov> <23c51943-51a4-4478-760f-375d02caa39b@schaufler-ca.com> <1513269771.18008.6.camel@tycho.nsa.gov>
X-Mailing-List: linux-kernel@vger.kernel.org
On 12/14/2017 8:42 AM, Stephen Smalley wrote:
> On Thu, 2017-12-14 at 08:18 -0800, Casey Schaufler wrote:
>> On 12/13/2017 7:18 AM, Stephen Smalley wrote:
>>> On Wed, 2017-12-13 at 09:25 +0000, yangjihong wrote:
>>>> Hello,
>>>>
>>>> I am doing stress testing on a 3.10 kernel (CentOS 7.4),
>>>> constantly starting a number of docker containers with SELinux
>>>> enabled, and after about 2 days the kernel panics with a softlockup:
>>>>
>>>>  [] sched_show_task+0xb8/0x120
>>>>  [] show_lock_info+0x20f/0x3a0
>>>>  [] watchdog_timer_fn+0x1da/0x2f0
>>>>  [] ? watchdog_enable_all_cpus.part.4+0x40/0x40
>>>>  [] __hrtimer_run_queues+0xd2/0x260
>>>>  [] hrtimer_interrupt+0xb0/0x1e0
>>>>  [] local_apic_timer_interrupt+0x37/0x60
>>>>  [] smp_apic_timer_interrupt+0x50/0x140
>>>>  [] apic_timer_interrupt+0x6d/0x80
>>>>  [] ? sidtab_context_to_sid+0xb3/0x480
>>>>  [] ? sidtab_context_to_sid+0x110/0x480
>>>>  [] ? mls_setup_user_range+0x145/0x250
>>>>  [] security_get_user_sids+0x3f7/0x550
>>>>  [] sel_write_user+0x12b/0x210
>>>>  [] ? sel_write_member+0x200/0x200
>>>>  [] selinux_transaction_write+0x48/0x80
>>>>  [] vfs_write+0xbd/0x1e0
>>>>  [] SyS_write+0x7f/0xe0
>>>>  [] system_call_fastpath+0x16/0x1b
>>>>
>>>> My opinion:
>>>> when a docker container starts, it mounts overlay filesystems
>>>> with different SELinux contexts, with mount points such as:
>>>>
>>>> overlay on /var/lib/docker/overlay2/be3ef517730d92fc4530e0e952eae4f6cb0f07b4bc326cb07495ca08fc9ddb66/merged type overlay (rw,relatime,context="system_u:object_r:svirt_sandbox_file_t:s0:c414,c873",lowerdir=/var/lib/docker/overlay2/l/Z4U7WY6ASNV5CFWLADPARHHWY7:/var/lib/docker/overlay2/l/V2S3HOKEFEOQLHBVAL5WLA3YLS:/var/lib/docker/overlay2/l/46YGYO474KLOULZGDSZDW2JPRI,upperdir=/var/lib/docker/overlay2/be3ef517730d92fc4530e0e952eae4f6cb0f07b4bc326cb07495ca08fc9ddb66/diff,workdir=/var/lib/docker/overlay2/be3ef517730d92fc4530e0e952eae4f6cb0f07b4bc326cb07495ca08fc9ddb66/work)
>>>> shm on /var/lib/docker/containers/9fd65e177d2132011d7b422755793449c91327ca577b8f5d9d6a4adf218d4876/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,context="system_u:object_r:svirt_sandbox_file_t:s0:c414,c873",size=65536k)
>>>> overlay on /var/lib/docker/overlay2/38d1544d080145c7d76150530d0255991dfb7258cbca14ff6d165b94353eefab/merged type overlay (rw,relatime,context="system_u:object_r:svirt_sandbox_file_t:s0:c431,c651",lowerdir=/var/lib/docker/overlay2/l/3MQQXB4UCLFB7ANVRHPAVRCRSS:/var/lib/docker/overlay2/l/46YGYO474KLOULZGDSZDW2JPRI,upperdir=/var/lib/docker/overlay2/38d1544d080145c7d76150530d0255991dfb7258cbca14ff6d165b94353eefab/diff,workdir=/var/lib/docker/overlay2/38d1544d080145c7d76150530d0255991dfb7258cbca14ff6d165b94353eefab/work)
>>>> shm on /var/lib/docker/containers/662e7f798fc08b09eae0f0f944537a4bcedc1dcf05a65866458523ffd4a71614/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,context="system_u:object_r:svirt_sandbox_file_t:s0:c431,c651",size=65536k)
>>>>
>>>> sidtab_search_context checks whether the context is already in the
>>>> sidtab list; if it is not found, a new node is generated and
>>>> inserted into the list. As the number of containers increases,
>>>> the number of context nodes also grows; in our testing the final
>>>> number of nodes reached 300,000+, and a sidtab_context_to_sid call
>>>> takes 100-200 ms, which leads to the system softlockup.
>>>>
>>>> Is this an SELinux bug? When a filesystem is unmounted, why is the
>>>> context node not deleted? I cannot find the relevant function to
>>>> delete the node in sidtab.c.
>>>>
>>>> Thanks for reading and looking forward to your reply.
>>> So, does docker just keep allocating a unique category set for every
>>> new container, never reusing them even if the container is
>>> destroyed? That would be a bug in docker IMHO. Or are you creating
>>> an unbounded number of containers and never destroying the older
>>> ones?
>> You can't reuse the security context. A process in ContainerA sends
>> a labeled packet to MachineB. ContainerA goes away and its context
>> is recycled in ContainerC. MachineB responds some time later, again
>> with a labeled packet. ContainerC gets information intended for
>> ContainerA, and uses the information to take over the Elbonian
>> government.
> Docker isn't using labeled networking (nor is anything else by
> default; it is only enabled if explicitly configured).

If labeled networking weren't an issue we'd have full security module
stacking by now. Yes, it's an edge case. If you want to use labeled NFS
or a local filesystem that gets mounted in each container (don't tell
me that nobody would do that), you've got the same problem.
>>> On the selinux userspace side, we'd also like to eliminate the use
>>> of /sys/fs/selinux/user (sel_write_user -> security_get_user_sids)
>>> entirely, which is what triggered this for you.
>>>
>>> We cannot currently delete a sidtab node because we have no way of
>>> knowing if there are any lingering references to the SID.  Fixing
>>> that would require reference-counted SIDs, which goes beyond just
>>> SELinux since SIDs/secids are returned by LSM hooks and cached in
>>> other kernel data structures.
>> You could delete a sidtab node. The code already deals with
>> unfindable SIDs. The issue is that eventually you run out of SIDs.
>> Then you are forced to recycle SIDs, which leads to the overthrow of
>> the Elbonian government.
> We don't know when we can safely delete a sidtab node since SIDs
> aren't reference counted and we can't know whether a SID is still in
> use somewhere in the kernel. Doing so prematurely would lead to the
> SID being remapped to the unlabeled context, and then likely to
> undesired denials.

I would suggest that if you delete a sidtab node, and someone comes
along later and tries to use it, that denial is exactly what you would
desire. I don't see any other rational action.

>>> sidtab_search_context() could no doubt be optimized for the
>>> negative case; there was an earlier optimization for the positive
>>> case by adding a cache to sidtab_context_to_sid() prior to calling
>>> it.  It's a reverse lookup in the sidtab.
>> This seems like a bad idea.
> Not sure what you mean, but it can certainly be changed to at least
> use a hash table for these reverse lookups.