From: pjnuzzi@tycho.ncsc.mil (Paul Nuzzi)
Date: Fri, 01 Oct 2010 15:06:31 -0400
Subject: [refpolicy] [PATCH] hadoop 1/10 -- unconfined
In-Reply-To: <20101001180127.GA23025@localhost.localdomain>
References: <20100921195753.GA5706@localhost.localdomain>
 <1285099440.1806.13.camel@jeremy-ubuntu>
 <4C9B5262.7080405@tycho.ncsc.mil>
 <1285338053.1772.90.camel@jeremy-ubuntu>
 <4CA0E75A.4080406@tycho.ncsc.mil>
 <4CA4E77C.9040907@tycho.ncsc.mil>
 <20101001120217.GA14548@localhost.localdomain>
 <4CA5FB87.9080909@tycho.ncsc.mil>
 <20101001180127.GA23025@localhost.localdomain>
Message-ID: <4CA63137.1020800@tycho.ncsc.mil>
To: refpolicy@oss.tresys.com
List-Id: refpolicy.oss.tresys.com

On 10/01/2010 02:01 PM, Dominick Grift wrote:
> On Fri, Oct 01, 2010 at 11:17:27AM -0400, Paul Nuzzi wrote:
>> On 10/01/2010 08:02 AM, Dominick Grift wrote:
>>> On Thu, Sep 30, 2010 at 03:39:40PM -0400, Paul Nuzzi wrote:
>>>> I updated the patch based on recommendations from the mailing list.
>>>> All of hadoop's services are included in one module instead of
>>>> individual ones. Unconfined and sysadm roles are given access to the
>>>> hadoop and zookeeper client domain transitions. The services are
>>>> started using run_init. Let me know what you think.
>>>
>>> Why do some hadoop domains need to manage generic tmp?
>>>
>>> files_manage_generic_tmp_dirs(zookeeper_t)
>>> files_manage_generic_tmp_dirs(hadoop_t)
>>> files_manage_generic_tmp_dirs(hadoop_$1_initrc_t)
>>> files_manage_generic_tmp_files(hadoop_$1_initrc_t)
>>> files_manage_generic_tmp_files(hadoop_$1_t)
>>> files_manage_generic_tmp_dirs(hadoop_$1_t)
>>
>> This has to be done for Java JMX to work. All of the files are written
>> to /tmp/hsperfdata_(hadoop/zookeeper). /tmp/hsperfdata_ is labeled
>> tmp_t, while the files for each service are labeled hadoop_*_tmp_t.
>> The first service would end up owning the directory if it were not
>> labeled tmp_t.
>>
>>> You probably need:
>>>
>>> files_search_pids() and files_search_locks() for hadoop_$1_initrc_t,
>>> because it needs to traverse /var/run and /var/lock/subsys to be able
>>> to manage its objects there.
>>>
>>> Can use rw_fifo_file_perms here:
>>>
>>> allow hadoop_$1_initrc_t self:fifo_file { read write getattr ioctl };
>>>
>>> Might want to split this into hadoop_read_config_files and
>>> hadoop_exec_config_files:
>>>
>>> hadoop_rx_etc(hadoop_$1_initrc_t)
>>>
>>> This seems wrong. Why does it need that? Use files_search_var_lib()
>>> if possible:
>>>
>>> files_read_var_lib_files(hadoop_$1_t)
>>>
>>> This is not a declaration and might want to use filetrans_pattern()
>>> instead:
>>>
>>> type_transition hadoop_$1_initrc_t hadoop_var_run_t:file hadoop_$1_initrc_var_run_t;
>>
>> Changed. Thanks for the comments.
>>
>>> Other than the above, there are some style issues:
>>>
>>> http://oss.tresys.com/projects/refpolicy/wiki/StyleGuide
>>>
>>> But I can help clean that up once the above issues are resolved.
>>
>> Is there a style-checking script for refpolicy patches, similar to the
>> Linux kernel's?
>
> Not that I am aware of.
>
> Are you sure that your entries in hadoop.fc work? You could check by
> intentionally mislabeling the paths and their children with chcon and
> then seeing whether restorecon restores everything properly.

Based on testing, the paths get labeled correctly with restorecon. I am
having an issue with the kernel not labeling files and directories
correctly because of the wildcards in /var/lib/hadoop, so I gave the
services enough permission to relabel what they need at runtime. I
didn't want to hard-code the directory names because the policy would
lose version independence. Sketches of the pieces discussed above
follow.
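First, the hsperfdata arrangement. This is a minimal sketch, assuming
the hadoop_$1_t / hadoop_$1_tmp_t naming from the patch's template; the
interfaces and patterns are stock refpolicy, but the exact types are
illustrative, not the final patch text:

# The hsperfdata_* directory itself is created as, and stays, generic
# tmp_t, so no single service domain ends up owning it.
files_manage_generic_tmp_dirs(hadoop_$1_t)

# Files created inside it transition to a per-service private type, so
# the services cannot read or clobber each other's perf data.
type hadoop_$1_tmp_t;
files_tmp_file(hadoop_$1_tmp_t)
files_tmp_filetrans(hadoop_$1_t, hadoop_$1_tmp_t, file)
manage_files_pattern(hadoop_$1_t, hadoop_$1_tmp_t, hadoop_$1_tmp_t)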
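Second, the substitutions suggested above, spelled out — again a sketch
against stock refpolicy interfaces, with the hadoop_$1_* types as
declared by the template:

# Traverse /var/run and /var/lock/subsys so the init script domain can
# manage its pid and lock files there.
files_search_pids(hadoop_$1_initrc_t)
files_search_locks(hadoop_$1_initrc_t)

# Use the permission-set macro instead of listing the permissions.
allow hadoop_$1_initrc_t self:fifo_file rw_fifo_file_perms;

# filetrans_pattern() also grants the directory permissions needed to
# create the entry, which a bare type_transition statement does not.
filetrans_pattern(hadoop_$1_initrc_t, hadoop_var_run_t, hadoop_$1_initrc_var_run_t, file)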
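Finally, the runtime relabel grants look roughly like this, where
hadoop_var_lib_t and hadoop_$1_var_lib_t are placeholders for whatever
the patch actually declares under /var/lib/hadoop:

# Let each service fix up labels under /var/lib/hadoop itself, since the
# kernel cannot apply the wildcarded fc entries when files are created.
relabel_dirs_pattern(hadoop_$1_t, hadoop_var_lib_t, hadoop_$1_var_lib_t)
relabel_files_pattern(hadoop_$1_t, hadoop_var_lib_t, hadoop_$1_var_lib_t)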
>>
>> Signed-off-by: Paul Nuzzi