From: pjnuzzi@tycho.ncsc.mil (Paul Nuzzi)
Date: Thu, 23 Sep 2010 09:54:28 -0400
Subject: [refpolicy] [PATCH] hadoop 1/10 -- unconfined
In-Reply-To: <4C98E699.2040802@gmail.com>
References: <20100921090159.GA11192@localhost.localdomain> <4C98D248.9000803@tycho.ncsc.mil> <4C98D9FC.50002@gmail.com> <4C98DE9A.5060908@tycho.ncsc.mil> <4C98E699.2040802@gmail.com>
Message-ID: <4C9B5C14.20305@tycho.ncsc.mil>
To: refpolicy@oss.tresys.com
List-Id: refpolicy.oss.tresys.com

On 09/21/2010 01:08 PM, Dominick Grift wrote:
> On 09/21/2010 06:34 PM, Paul Nuzzi wrote:
>> On 09/21/2010 12:14 PM, Dominick Grift wrote:
>>> On 09/21/2010 05:42 PM, Paul Nuzzi wrote:
>>>> On 09/21/2010 05:02 AM, Dominick Grift wrote:
>>>>> Well, I've rewritten the policy as much as I can with the
>>>>> information that I currently have.
>>>>> Because of the use of the hadoop domain attributes I cannot
>>>>> determine whether it is the initrc script doing something or the
>>>>> application, so I cannot currently finish the
>>>>> hadoop_domain_template policy.
>>>>
>>>> The hadoop_domain policy is basic stuff that most programs share,
>>>> plus a few hadoop-specific things. I initially had separate
>>>> functions for initrc and hadoop type policy.
>>>> Since we are not exporting hadoop-specific functionality to other
>>>> modules, I removed them from the .if file.
>>>
>>> With that in mind, it looks like the policy has some duplicate rules.
>>>
>>>>> Also, I have no clue what transitions to the hadoop_t domain. It
>>>>> does not own an initrc script, so I gather it is not an init
>>>>> daemon domain. Must be an application domain then?
>>>>> A lot of other things aren't clear and/or make no sense.
>>>>> I have also left out things that I think should be handled
>>>>> differently.
>>>>
>>>> hadoop_t is for the hadoop executable, which is /usr/bin/hadoop. It
>>>> does basic file system stuff, submits jobs, and administers the
>>>> cluster.
>>>
>>> And who or what runs it? Who or what transitions to the hadoop_t
>>> domain?
>>
>> All of the users run it.
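For anyone unfamiliar with the convention being discussed: a "run" interface in refpolicy is what lets a role reach a domain like hadoop_t. A sketch of what such an interface typically looks like (following standard refpolicy conventions, not necessarily the exact contents of the patch):

```
## <summary>
##	Execute hadoop in the hadoop domain, and
##	allow the specified role the hadoop domain.
## </summary>
interface(`hadoop_run',`
	gen_require(`
		type hadoop_t;
	')

	# domain transition on the hadoop entrypoint,
	# plus role authorization for the target domain
	hadoop_domtrans($1)
	role $2 types hadoop_t;
')
```

Until some role is authorized for hadoop_t this way, the domain stays unreachable, which matches the behavior described below.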
>> Users and the sysadm need to transition to the hadoop_t domain.
>
> OK, so you can transition to it by creating a custom module with the
> following:
>
> hadoop_run(sysadm_t, sysadm_r)
>
> Can you confirm that this works?

They are transitioning correctly for sysadm_u.

>>>>> It would be cool if someone could test this policy and provide
>>>>> feedback in the shape of AVC denials.
>>>>
>>>> I was able to get the zookeeper server and client to run. Here is
>>>> the audit2allow output in permissive mode. Ignore the networking
>>>> AVCs; I didn't port the networking functions since it was built as
>>>> a module.
>>>> The zookeeper client doesn't domtrans into a domain. There is a
>>>> semodule insert error. hadoop_tasktracker_data_t needs to be
>>>> modified.
>>>
>>> Thanks, I fixed that file context specification now.
>>>
>>> Were you able to run the init script domains in permissive mode? Does it
>>> work when you use run_init? Do the initrc domains properly transition to
>>> the main domains in permissive mode?
>>
>> None of the pseudo-initrc domains transitioned to the target domain
>> using run_init.
>
> Any AVC denials related to this? The domain transitions are
> specified in policy (example: hadoop_datanode_initrc_t -> hadoop_exec_t
> -> hadoop_datanode_t)
>
>>
>>> Could you provide some AVC denials for that?
>
>> There don't seem to be any denials for the domtrans.
>
> So the domain transition does not occur but no AVC denials are shown?
> That is strange.
> Maybe semodule -DB will expose some related information. Also check for
> SELINUX_ERR (grep -i SELINUX_ERR /var/log/audit/audit.log)
>
> Are the executables properly labelled?

I don't know if you changed anything with the new patch, but they seem
to be transitioning correctly now. I added a separate module with
hadoop_run(sysadm_t, sysadm_r) and hadoop_run(unconfined_t,
unconfined_r). To get it to compile you need to add a gen_require for
hadoop_exec_t to hadoop_domtrans.

>>
>>> You should also specify file contexts for the pid files and lock files.
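For reference, the separate module mentioned above would look something like this (the module name is arbitrary; it assumes the hadoop_run() interface from the patch is installed):

```
policy_module(myhadoop, 1.0)

gen_require(`
	type sysadm_t, unconfined_t;
	role sysadm_r, unconfined_r;
')

# allow both the sysadm and unconfined roles to
# transition into the hadoop_t domain
hadoop_run(sysadm_t, sysadm_r)
hadoop_run(unconfined_t, unconfined_r)
```

It can be built and loaded the usual way, assuming selinux-policy-devel is installed: make -f /usr/share/selinux/devel/Makefile myhadoop.pp && semodule -i myhadoop.pp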
>>
>> system_u:system_r:hadoop_datanode_initrc_t:s0 0 S 489 3125 1 1 80 0 - 579640 futex_ ? 00:00:02 java
>> system_u:system_r:hadoop_namenode_initrc_t:s0 0 S 489 3376 1 2 80 0 - 581189 futex_ ? 00:00:02 java
>> system_u:system_r:zookeeper_server_t:s0      0 S 488 3598 1 0 80 0 - 496167 futex_ ? 00:00:00 java
>>
>> -rw-r--r--. hadoop hadoop system_u:object_r:hadoop_datanode_var_run_t:s0 hadoop-hadoop-datanode.pid
>> -rw-r--r--. hadoop hadoop system_u:object_r:hadoop_namenode_var_run_t:s0 hadoop-hadoop-namenode.pid
>
>>
>> -rw-r--r--. root root system_u:object_r:hadoop_datanode_initrc_lock_t:s0 /var/lock/subsys/hadoop-datanode
>> -rw-r--r--. root root system_u:object_r:hadoop_namenode_initrc_lock_t:s0 /var/lock/subsys/hadoop-namenode
>>
>>>>
>>>> #============= zookeeper_server_t ==============
>>>> allow zookeeper_server_t java_exec_t:file { read getattr open execute execute_no_trans };
>>>> allow zookeeper_server_t net_conf_t:file { read getattr open };
>>>> allow zookeeper_server_t port_t:tcp_socket { name_bind name_connect };
>>>
>>> What port is it connecting and binding sockets to? Why are they not
>>> labelled?
>>
>> I left out the networking since I built it as a module. I haven't had
>> luck running refpolicy on Fedora. The corenet_* functions might need
>> to be written if refpolicy doesn't want all the ports permanently
>> defined.
>
> You could patch Fedora's selinux-policy RPM; that is what I usually do.
> Anyway, I will just assume it's the ports we declared.
>>
>>>> allow zookeeper_server_t self:process execmem;
>>>> allow zookeeper_server_t self:tcp_socket { setopt read bind create accept write getattr connect shutdown listen };
>>>>
>>>
>>> I will add the above rules to the policy that I have, except for the
>>> bind/connect to generic port types, as this seems like a bad idea to me.
>>
>> I think I left out binding to generic ports in my policy.
>>
>>> Were there no denials left for the zookeeper client?
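Based on the labels shown above, the file context entries for the pid and lock files might look like the following sketch (the /var/run/hadoop directory is a guess from the pid file names; only the lock file paths are confirmed by the listing):

```
/var/run/hadoop/hadoop-hadoop-datanode\.pid	--	gen_context(system_u:object_r:hadoop_datanode_var_run_t,s0)
/var/run/hadoop/hadoop-hadoop-namenode\.pid	--	gen_context(system_u:object_r:hadoop_namenode_var_run_t,s0)

/var/lock/subsys/hadoop-datanode	--	gen_context(system_u:object_r:hadoop_datanode_initrc_lock_t,s0)
/var/lock/subsys/hadoop-namenode	--	gen_context(system_u:object_r:hadoop_namenode_initrc_lock_t,s0)
```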
>>> Did you use
>>> zookeeper_run_client() to transition to the zookeeper_t domain?
>>
>> The zookeeper client transitioned to the unconfined_java_t domain, so
>> there were no denials. I ran your patched policy without any
>> modifications.
>>
>
> Because you probably ran it in the unconfined domain. You should use the
> zookeeper_run_client() that my patch provides, so that you can
> transition to the confined domain.

Looks like that transitions correctly when I add
zookeeper_run_client(unconfined_t, unconfined_r). Thanks for taking an
interest in the patch. How do we want to merge your changes with mine?

> I will add file context specifications for the locks and pids you have
> reported.
>
>>>>> Some properties of this policy:
>>>>>
>>>>> The hadoop init script domains must be started by the system, or
>>>>> by unconfined or sysadm_t by using run_init.
>>>>> To use the zookeeper client domain, zookeeper_run_client() must be
>>>>> called for a domain. (For example, if you wish to run it as
>>>>> unconfined_t, you would call zookeeper_run_client(unconfined_t,
>>>>> unconfined_r).)
>>>>> The zookeeper server seems to be an ordinary init daemon domain.
>>>>> Since I do not know what kind of domain hadoop_t is, it is
>>>>> currently pretty much unreachable. I have created a
>>>>> hadoop_domtrans interface that can be called, but currently no
>>>>> role is allowed the hadoop_t domain.
>>>>>
>>>>> Signed-off-by: Dominick Grift
>>>>> _______________________________________________
>>>>> refpolicy mailing list
>>>>> refpolicy at oss.tresys.com
>>>>> http://oss.tresys.com/mailman/listinfo/refpolicy
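To summarize the usage that worked for me in one place (a sketch, assuming the interface names from Dominick's patch; the module name is arbitrary):

```
policy_module(mylocal, 1.0)

gen_require(`
	type unconfined_t;
	role unconfined_r;
')

# transition from unconfined_t into the confined
# zookeeper client domain instead of unconfined_java_t
zookeeper_run_client(unconfined_t, unconfined_r)
```

With this loaded, the init script domains are still started through run_init as described above, e.g. run_init /etc/init.d/hadoop-namenode start, and the client transitions into zookeeper_t rather than unconfined_java_t.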