From: domg472@gmail.com (Dominick Grift)
Date: Wed, 6 Oct 2010 19:34:07 +0200
Subject: [refpolicy] [PATCH] hadoop 1/10 -- unconfined
In-Reply-To: <4CAC9BC0.9050205@tycho.ncsc.mil>
References: <20101006102510.GA30175@localhost.localdomain> <4CAC9BC0.9050205@tycho.ncsc.mil>
Message-ID: <20101006173405.GA2914@localhost.localdomain>
To: refpolicy@oss.tresys.com
List-Id: refpolicy.oss.tresys.com

On Wed, Oct 06, 2010 at 11:54:40AM -0400, Paul Nuzzi wrote:
> On 10/06/2010 06:25 AM, Dominick Grift wrote:
> > Some more suggested changes. One of them is to not allow the hadoop rc script domains and the hadoop domain to write log files (just append).
> >
> > I wonder if this revision still works for you.
> >
> > I am also still wondering about the file context specifications. If I am correct, you state that some of them do not work for one reason or another. We should really try to make them all work, else it does not make sense to specify them in the first place.
>
> I've had an issue with /var/zookeeper(/.*) and /var/lib/hadoop(.*)?/cache/hadoop/dfs/name(/.*)?. It seems the kernel file_contexts hits on /var(/.*) => var_t before getting to /var/zookeeper(/.*) => zookeeper_server_var_t.
>
> > I added some comments inline. Feel free to undo/change anything I added that broke the policy.

That is strange; /var/zookeeper(/.*)? should override /var(/.*)? for /var/zookeeper.

> I briefly tested your policy on the NameNode/JobTracker/SecondaryNameNode. It would not work in enforcing mode because you are not allowing the services to write log files.

I guess that may be a bug in the various hadoop domains, and/or something redirects to the log files. Log files should be opened for append only, but I guess we will have to allow it. I wonder why this does not work for hadoop but it seems to work for zookeeper.

> Other comments inline.

> > Signed-off-by: Dominick Grift
> > ---
> > :100644 100644 2ecdde8... 73163db... M policy/modules/kernel/corenetwork.te.in
> > :100644 100644 cad05ff... d2bc2b1...
M policy/modules/roles/sysadm.te > > :000000 100644 0000000... 5935162... A policy/modules/services/hadoop.fc > > :000000 100644 0000000... cee7cd5... A policy/modules/services/hadoop.if > > :000000 100644 0000000... 515d2da... A policy/modules/services/hadoop.te > > :100644 100644 f976344... f1e6c9f... M policy/modules/system/unconfined.te > > policy/modules/kernel/corenetwork.te.in | 5 + > > policy/modules/roles/sysadm.te | 8 + > > policy/modules/services/hadoop.fc | 55 ++++ > > policy/modules/services/hadoop.if | 364 +++++++++++++++++++++++++++ > > policy/modules/services/hadoop.te | 410 +++++++++++++++++++++++++++++++ > > policy/modules/system/unconfined.te | 8 + > > 6 files changed, 850 insertions(+), 0 deletions(-) > > > > diff --git a/policy/modules/kernel/corenetwork.te.in b/policy/modules/kernel/corenetwork.te.in > > index 2ecdde8..73163db 100644 > > --- a/policy/modules/kernel/corenetwork.te.in > > +++ b/policy/modules/kernel/corenetwork.te.in > > @@ -105,6 +105,8 @@ network_port(giftd, tcp,1213,s0) > > network_port(git, tcp,9418,s0, udp,9418,s0) > > network_port(gopher, tcp,70,s0, udp,70,s0) > > network_port(gpsd, tcp,2947,s0) > > +network_port(hadoop_datanode, tcp, 50010,s0) > > +network_port(hadoop_namenode, tcp, 8020,s0) > > network_port(hddtemp, tcp,7634,s0) > > network_port(howl, tcp,5335,s0, udp,5353,s0) > > network_port(hplip, tcp,1782,s0, tcp,2207,s0, tcp,2208,s0, tcp, 8290,s0, tcp,50000,s0, tcp,50002,s0, tcp,8292,s0, tcp,9100,s0, tcp,9101,s0, tcp,9102,s0, tcp,9220,s0, tcp,9221,s0, tcp,9222,s0, tcp,9280,s0, tcp,9281,s0, tcp,9282,s0, tcp,9290,s0, tcp,9291,s0, tcp,9292,s0) > > @@ -211,6 +213,9 @@ network_port(xdmcp, udp,177,s0, tcp,177,s0) > > network_port(xen, tcp,8002,s0) > > network_port(xfs, tcp,7100,s0) > > network_port(xserver, tcp,6000-6020,s0) > > +network_port(zookeeper_client, tcp, 2181,s0) > > +network_port(zookeeper_election, tcp, 3888,s0) > > +network_port(zookeeper_leader, tcp, 2888,s0) > > network_port(zebra, tcp,2600-2604,s0, 
tcp,2606,s0, udp,2600-2604,s0, udp,2606,s0)
> > network_port(zope, tcp,8021,s0)
> >
> > diff --git a/policy/modules/roles/sysadm.te b/policy/modules/roles/sysadm.te
> > index cad05ff..d2bc2b1 100644
> > --- a/policy/modules/roles/sysadm.te
> > +++ b/policy/modules/roles/sysadm.te
> > @@ -152,6 +152,10 @@ optional_policy(`
> > ')
> >
> > optional_policy(`
> > +	hadoop_run(sysadm_t, sysadm_r)
> > +')
> > +
> > +optional_policy(`
> > 	# allow system administrator to use the ipsec script to look
> > 	# at things (e.g., ipsec auto --status)
> > 	# probably should create an ipsec_admin role for this kind of thing
> > @@ -392,6 +396,10 @@ optional_policy(`
> > 	yam_run(sysadm_t, sysadm_r)
> > ')
> >
> > +optional_policy(`
> > +	hadoop_zookeeper_run_client(sysadm_t, sysadm_r)
> > +')
> > +
> > ifndef(`distro_redhat',`
> > 	optional_policy(`
> > 		auth_role(sysadm_r, sysadm_t)
> > diff --git a/policy/modules/services/hadoop.fc b/policy/modules/services/hadoop.fc
> > new file mode 100644
> > index 0000000..5935162
> > --- /dev/null
> > +++ b/policy/modules/services/hadoop.fc
> > @@ -0,0 +1,55 @@
> > +/etc/hadoop.*(/.*)? gen_context(system_u:object_r:hadoop_etc_t,s0)
> > +
> > +# Why do these regular expressions differ from the ones below (/etc/rc.d/init.d)? Which of the two works best?

> These work for Debian.

> > +/etc/init\.d/hadoop-datanode -- gen_context(system_u:object_r:hadoop_datanode_initrc_exec_t,s0)
> > +/etc/init\.d/hadoop-jobtracker -- gen_context(system_u:object_r:hadoop_jobtracker_initrc_exec_t,s0)
> > +/etc/init\.d/hadoop-namenode -- gen_context(system_u:object_r:hadoop_namenode_initrc_exec_t,s0)
> > +/etc/init\.d/hadoop-secondarynamenode -- gen_context(system_u:object_r:hadoop_secondarynamenode_initrc_exec_t,s0)
> > +/etc/init\.d/hadoop-tasktracker -- gen_context(system_u:object_r:hadoop_tasktracker_initrc_exec_t,s0)
> > +/etc/init\.d/zookeeper -- gen_context(system_u:object_r:zookeeper_server_initrc_exec_t,s0)
> > +

> These work for Fedora/RHEL. I haven't fully tested on Debian. Do we want to keep the Debian contexts?

I am not referring to the location/path; I am referring to the regex, like the -(.*)? that you use below but not above.

> > +/etc/rc\.d/init\.d/hadoop-(.*)?-datanode -- gen_context(system_u:object_r:hadoop_datanode_initrc_exec_t,s0)
> > +/etc/rc\.d/init\.d/hadoop-(.*)?-jobtracker -- gen_context(system_u:object_r:hadoop_jobtracker_initrc_exec_t,s0)
> > +/etc/rc\.d/init\.d/hadoop-(.*)?-namenode -- gen_context(system_u:object_r:hadoop_namenode_initrc_exec_t,s0)
> > +/etc/rc\.d/init\.d/hadoop-(.*)?-secondarynamenode -- gen_context(system_u:object_r:hadoop_secondarynamenode_initrc_exec_t,s0)
> > +/etc/rc\.d/init\.d/hadoop-(.*)?-tasktracker -- gen_context(system_u:object_r:hadoop_tasktracker_initrc_exec_t,s0)
> > +/etc/rc\.d/init\.d/hadoop-zookeeper -- gen_context(system_u:object_r:zookeeper_server_initrc_exec_t,s0)
> > +
> > +/etc/zookeeper.*(/.*)? gen_context(system_u:object_r:zookeeper_etc_t,s0)
> > +
> > +/usr/lib/hadoop(.*)?/bin/hadoop -- gen_context(system_u:object_r:hadoop_exec_t,s0)
> > +
> > +/usr/bin/zookeeper-client -- gen_context(system_u:object_r:zookeeper_exec_t,s0)
> > +/usr/bin/zookeeper-server -- gen_context(system_u:object_r:zookeeper_server_exec_t,s0)
> > +
> > +/var/zookeeper(/.*)? gen_context(system_u:object_r:zookeeper_server_var_t,s0)
> > +/var/lib/zookeeper(/.*)? gen_context(system_u:object_r:zookeeper_server_var_t,s0)
> > +
> > +/var/lib/hadoop(.*)? -d gen_context(system_u:object_r:hadoop_var_lib_t,s0)
> > +/var/lib/hadoop(.*)?/cache/hadoop/dfs/data(/.*)? gen_context(system_u:object_r:hadoop_datanode_var_lib_t,s0)
> > +/var/lib/hadoop(.*)?/cache/hadoop/dfs/name(/.*)? gen_context(system_u:object_r:hadoop_namenode_var_lib_t,s0)
> > +/var/lib/hadoop(.*)?/cache/hadoop/dfs/namesecondary(/.*)? gen_context(system_u:object_r:hadoop_secondarynamenode_var_lib_t,s0)
> > +/var/lib/hadoop(.*)?/cache/hadoop/mapred/local/jobTracker(/.*)?
gen_context(system_u:object_r:hadoop_jobtracker_var_lib_t,s0) > > +/var/lib/hadoop(.*)?/cache/hadoop/mapred/local/taskTracker(/.*)? gen_context(system_u:object_r:hadoop_tasktracker_var_lib_t,s0) > > + > > +/var/lock/subsys/hadoop-datanode -- gen_context(system_u:object_r:hadoop_datanode_lock_t,s0) > > +/var/lock/subsys/hadoop-namenode -- gen_context(system_u:object_r:hadoop_namenode_lock_t,s0) > > +/var/lock/subsys/hadoop-jobtracker -- gen_context(system_u:object_r:hadoop_jobtracker_lock_t,s0) > > +/var/lock/subsys/hadoop-tasktracker -- gen_context(system_u:object_r:hadoop_tasktracker_lock_t,s0) > > +/var/lock/subsys/hadoop-secondarynamenode -- gen_context(system_u:object_r:hadoop_secondarynamenode_lock_t,s0) > > + > > +/var/log/hadoop(.*)? -d gen_context(system_u:object_r:hadoop_log_t,s0) > > +/var/log/hadoop(.*)?/hadoop-hadoop-datanode-(.*)? gen_context(system_u:object_r:hadoop_datanode_log_t,s0) > > +/var/log/hadoop(.*)?/hadoop-hadoop-jobtracker-(.*)? gen_context(system_u:object_r:hadoop_jobtracker_log_t,s0) > > +/var/log/hadoop(.*)?/hadoop-hadoop-namenode-(.*)? gen_context(system_u:object_r:hadoop_namenode_log_t,s0) > > +/var/log/hadoop(.*)?/hadoop-hadoop-secondarynamenode-(.*)? gen_context(system_u:object_r:hadoop_secondarynamenode_log_t,s0) > > +/var/log/hadoop(.*)?/hadoop-hadoop-tasktracker-(.*)? gen_context(system_u:object_r:hadoop_tasktracker_log_t,s0) > > +/var/log/hadoop(.*)?/history(/.*)? gen_context(system_u:object_r:hadoop_jobtracker_log_t,s0) > > +/var/log/zookeeper(/.*)? gen_context(system_u:object_r:zookeeper_log_t,s0) > > + > > +/var/run/hadoop(.*)? 
-d gen_context(system_u:object_r:hadoop_var_run_t,s0) > > +/var/run/hadoop(.*)?/hadoop-hadoop-datanode\.pid -- gen_context(system_u:object_r:hadoop_datanode_initrc_var_run_t,s0) > > +/var/run/hadoop(.*)?/hadoop-hadoop-namenode\.pid -- gen_context(system_u:object_r:hadoop_namenode_initrc_var_run_t,s0) > > +/var/run/hadoop(.*)?/hadoop-hadoop-jobtracker\.pid -- gen_context(system_u:object_r:hadoop_jobtracker_initrc_var_run_t,s0) > > +/var/run/hadoop(.*)?/hadoop-hadoop-tasktracker\.pid -- gen_context(system_u:object_r:hadoop_tasktracker_initrc_var_run_t,s0) > > +/var/run/hadoop(.*)?/hadoop-hadoop-secondarynamenode\.pid -- gen_context(system_u:object_r:hadoop_secondarynamenode_initrc_var_run_t,s0) > > diff --git a/policy/modules/services/hadoop.if b/policy/modules/services/hadoop.if > > new file mode 100644 > > index 0000000..cee7cd5 > > --- /dev/null > > +++ b/policy/modules/services/hadoop.if > > @@ -0,0 +1,364 @@ > > +## Software for reliable, scalable, distributed computing. > > + > > +####################################### > > +## > > +## The template to define a hadoop domain. > > +## > > +## > > +## > > +## Domain prefix to be used. > > +## > > +## > > +# > > +template(`hadoop_domain_template',` > > + gen_require(` > > + attribute hadoop_domain; > > + type hadoop_log_t, hadoop_var_lib_t, hadoop_var_run_t; > > + type hadoop_exec_t, hadoop_hsperfdata_t, hadoop_etc_t; > > + ') > > + > > + ######################################## > > + # > > + # Shared declarations. 
> > + # > > + > > + type hadoop_$1_t, hadoop_domain; > > + domain_type(hadoop_$1_t) > > + domain_entry_file(hadoop_$1_t, hadoop_exec_t) > > + > > + type hadoop_$1_initrc_t; > > + type hadoop_$1_initrc_exec_t; > > + init_script_domain(hadoop_$1_initrc_t, hadoop_$1_initrc_exec_t) > > + > > + role system_r types { hadoop_$1_initrc_t hadoop_$1_t }; > > + > > + type hadoop_$1_lock_t; > > + files_lock_file(hadoop_$1_lock_t) > > + > > + type hadoop_$1_log_t; > > + logging_log_file(hadoop_$1_log_t) > > + > > + type hadoop_$1_var_lib_t; > > + files_type(hadoop_$1_var_lib_t) > > + > > + type hadoop_$1_initrc_var_run_t; > > + files_pid_file(hadoop_$1_initrc_var_run_t) > > + > > + type hadoop_$1_tmp_t; > > + files_tmp_file(hadoop_$1_tmp_t) > > + > > + #################################### > > + # > > + # Shared hadoop_$1 initrc policy. > > + # > > + > > + allow hadoop_$1_initrc_t self:capability { setuid setgid }; > > + dontaudit hadoop_$1_initrc_t self:capability sys_tty_config; > > + allow hadoop_$1_initrc_t self:fifo_file rw_fifo_file_perms; > > + allow hadoop_$1_initrc_t self:process setsched; > > + > > + allow hadoop_$1_initrc_t hadoop_$1_t:process { signal signull }; > > + > > + manage_files_pattern(hadoop_$1_initrc_t, hadoop_$1_lock_t, hadoop_$1_lock_t) > > + files_lock_filetrans(hadoop_$1_initrc_t, hadoop_$1_lock_t, file) > > + > > + append_files_pattern(hadoop_$1_initrc_t, hadoop_$1_log_t, hadoop_$1_log_t) > > + create_files_pattern(hadoop_$1_initrc_t, hadoop_$1_log_t, hadoop_$1_log_t) > > + read_files_pattern(hadoop_$1_initrc_t, hadoop_$1_log_t, hadoop_$1_log_t) > > + setattr_files_pattern(hadoop_$1_initrc_t, hadoop_$1_log_t, hadoop_$1_log_t) > > + filetrans_pattern(hadoop_$1_initrc_t, hadoop_log_t, hadoop_$1_log_t, file) > > + logging_search_logs(hadoop_$1_initrc_t) > > + > > + manage_dirs_pattern(hadoop_$1_initrc_t, hadoop_var_run_t, hadoop_var_run_t) > > + manage_files_pattern(hadoop_$1_initrc_t, hadoop_var_run_t, hadoop_var_run_t) > > + > > + 
manage_files_pattern(hadoop_$1_initrc_t, hadoop_$1_initrc_var_run_t, hadoop_$1_initrc_var_run_t) > > + filetrans_pattern(hadoop_$1_initrc_t, hadoop_var_run_t, hadoop_$1_initrc_var_run_t, file) > > + files_search_pids(hadoop_$1_initrc_t) > > + > > + domtrans_pattern(hadoop_$1_initrc_t, hadoop_exec_t, hadoop_$1_t) > > + > > + can_exec(hadoop_$1_initrc_t, hadoop_etc_t) > > + > > + kernel_read_kernel_sysctls(hadoop_$1_initrc_t) > > + kernel_read_sysctl(hadoop_$1_initrc_t) > > + kernel_read_system_state(hadoop_$1_initrc_t) > > + > > + corecmd_exec_bin(hadoop_$1_initrc_t) > > + corecmd_exec_shell(hadoop_$1_initrc_t) > > + > > + files_dontaudit_list_default(hadoop_$1_initrc_t) > > + files_read_etc_files(hadoop_$1_initrc_t) > > + files_read_usr_files(hadoop_$1_initrc_t) > > + > > + fs_getattr_xattr_fs(hadoop_$1_initrc_t) > > + > > + init_rw_utmp(hadoop_$1_initrc_t) > > + init_use_script_ptys(hadoop_$1_initrc_t) > > + > > + logging_send_audit_msgs(hadoop_$1_initrc_t) > > + logging_send_syslog_msg(hadoop_$1_initrc_t) > > + > > + miscfiles_read_localization(hadoop_$1_initrc_t) > > + > > + term_use_generic_ptys(hadoop_$1_initrc_t) > > + > > + consoletype_exec(hadoop_$1_initrc_t) > > + > > + userdom_dontaudit_search_user_home_dirs(hadoop_$1_initrc_t) > > + > > + optional_policy(` > > + nscd_socket_use(hadoop_$1_initrc_t) > > + ') > > + > > + #################################### > > + # > > + # Shared hadoop_$1 policy. 
> > + # > > + > > + allow hadoop_$1_t self:process execmem; > > + dontaudit hadoop_$1_t self:netlink_route_socket rw_netlink_socket_perms; > > + allow hadoop_$1_t self:fifo_file rw_fifo_file_perms; > > + allow hadoop_$1_t self:tcp_socket create_stream_socket_perms; > > + allow hadoop_$1_t self:udp_socket create_socket_perms; > > + > > + allow hadoop_$1_t hadoop_domain:process signull; > > + > > + manage_dirs_pattern(hadoop_$1_t, hadoop_$1_var_lib_t, hadoop_$1_var_lib_t) > > + manage_files_pattern(hadoop_$1_t, hadoop_$1_var_lib_t, hadoop_$1_var_lib_t) > > + filetrans_pattern(hadoop_$1_t, hadoop_var_lib_t, hadoop_$1_var_lib_t, { file dir }) > > + files_search_var_lib(hadoop_$1_t) > > + > > + manage_dirs_pattern(hadoop_$1_t, hadoop_hsperfdata_t, hadoop_hsperfdata_t) > > + files_tmp_filetrans(hadoop_$1_t, hadoop_hsperfdata_t, dir) > > + > > + append_files_pattern(hadoop_$1_t, hadoop_$1_log_t, hadoop_$1_log_t) > > + create_files_pattern(hadoop_$1_t, hadoop_$1_log_t, hadoop_$1_log_t) > > + read_files_pattern(hadoop_$1_t, hadoop_$1_log_t, hadoop_$1_log_t) > > + setattr_files_pattern(hadoop_$1_t, hadoop_$1_log_t, hadoop_$1_log_t) > > + filetrans_pattern(hadoop_$1_t, hadoop_log_t, hadoop_$1_log_t, file) > > + logging_search_logs(hadoop_$1_t) > > + > > + allow hadoop_$1_t hadoop_var_run_t:dir getattr_dir_perms; > > + files_search_pids(hadoop_$1_t) > > + > > + manage_files_pattern(hadoop_$1_t, hadoop_$1_tmp_t, hadoop_$1_tmp_t) > > + filetrans_pattern(hadoop_$1_t, hadoop_hsperfdata_t, hadoop_$1_tmp_t, file) > > + > > + can_exec(hadoop_$1_t, hadoop_etc_t) > > + > > + kernel_read_network_state(hadoop_$1_t) > > + kernel_read_system_state(hadoop_$1_t) > > + > > + corecmd_exec_bin(hadoop_$1_t) > > + corecmd_exec_shell(hadoop_$1_t) > > + > > + corenet_all_recvfrom_unlabeled(hadoop_$1_t) > > + corenet_all_recvfrom_netlabel(hadoop_$1_t) > > + corenet_tcp_bind_all_nodes(hadoop_$1_t) > > + corenet_tcp_sendrecv_generic_if(hadoop_$1_t) > > + corenet_tcp_sendrecv_generic_node(hadoop_$1_t) 
> > + corenet_tcp_sendrecv_all_ports(hadoop_$1_t) > > + # Hadoop uses high ordered random ports for services > > + # If permanent ports are chosen, remove line below and lock down > > + corenet_tcp_connect_generic_port(hadoop_$1_t) > > + corenet_udp_sendrecv_generic_if(hadoop_$1_t) > > + corenet_udp_sendrecv_all_nodes(hadoop_$1_t) > > + corenet_udp_bind_all_nodes(hadoop_$1_t) > > + > > + dev_read_rand(hadoop_$1_t) > > + dev_read_urand(hadoop_$1_t) > > + dev_read_sysfs(hadoop_$1_t) > > + > > + files_read_etc_files(hadoop_$1_t) > > + > > + miscfiles_read_localization(hadoop_$1_t) > > + > > + sysnet_read_config(hadoop_$1_t) > > + > > + java_exec(hadoop_$1_t) > > + > > + optional_policy(` > > + nscd_socket_use(hadoop_$1_t) > > + ') > > +') > > + > > +######################################## > > +## > > +## Execute hadoop in the > > +## hadoop domain. > > +## > > +## > > +## > > +## Domain allowed to transition. > > +## > > +## > > +# > > +interface(`hadoop_domtrans',` > > + gen_require(` > > + type hadoop_t, hadoop_exec_t; > > + ') > > + > > + libs_search_lib($1) > > + domtrans_pattern($1, hadoop_exec_t, hadoop_t) > > +') > > + > > +######################################## > > +## > > +## Execute hadoop in the hadoop domain, > > +## and allow the specified role the > > +## hadoop domain. > > +## > > +## > > +## > > +## Domain allowed to transition. > > +## > > +## > > +## > > +## > > +## Role allowed access. > > +## > > +## > > +## > > +# > > +interface(`hadoop_run',` > > + gen_require(` > > + type hadoop_t; > > + ') > > + > > + hadoop_domtrans($1) > > + role $2 types hadoop_t; > > + > > + allow $1 hadoop_t:process { ptrace signal_perms }; > > + ps_process_pattern($1, hadoop_t) > > +') > > + > > +######################################## > > +## > > +## Execute zookeeper client in the > > +## zookeeper client domain. > > +## > > +## > > +## > > +## Domain allowed to transition. 
> > +## > > +## > > +# > > +interface(`hadoop_domtrans_zookeeper_client',` > > + gen_require(` > > + type zookeeper_t, zookeeper_exec_t; > > + ') > > + > > + corecmd_search_bin($1) > > + domtrans_pattern($1, zookeeper_exec_t, zookeeper_t) > > +') > > + > > +######################################## > > +## > > +## Execute zookeeper server in the > > +## zookeeper server domain. > > +## > > +## > > +## > > +## Domain allowed to transition. > > +## > > +## > > +# > > +interface(`hadoop_domtrans_zookeeper_server',` > > + gen_require(` > > + type zookeeper_server_t, zookeeper_server_exec_t; > > + ') > > + > > + corecmd_search_bin($1) > > + domtrans_pattern($1, zookeeper_server_exec_t, zookeeper_server_t) > > +') > > + > > +######################################## > > +## > > +## Execute zookeeper server in the > > +## zookeeper domain. > > +## > > +## > > +## > > +## Domain allowed to transition. > > +## > > +## > > +# > > +interface(`hadoop_zookeeper_initrc_domtrans_server',` > > + gen_require(` > > + type zookeeper_server_initrc_exec_t; > > + ') > > + > > + init_labeled_script_domtrans($1, zookeeper_server_initrc_exec_t) > > +') > > + > > +######################################## > > +## > > +## Execute zookeeper client in the > > +## zookeeper client domain, and allow the > > +## specified role the zookeeper client domain. > > +## > > +## > > +## > > +## Domain allowed to transition. > > +## > > +## > > +## > > +## > > +## Role allowed access. > > +## > > +## > > +## > > +# > > +interface(`hadoop_zookeeper_run_client',` > > + gen_require(` > > + type zookeeper_t; > > + ') > > + > > + hadoop_domtrans_zookeeper_client($1) > > + role $2 types zookeeper_t; > > + > > + allow $1 zookeeper_t:process { ptrace signal_perms }; > > + ps_process_pattern($1, zookeeper_t) > > +') > > + > > +######################################## > > +## > > +## Read hadoop configuration files. > > +## > > +## > > +## > > +## Domain allowed access. 
> > +## > > +## > > +# > > +interface(`hadoop_read_config_files',` > > + gen_require(` > > + type hadoop_etc_t; > > + ') > > + > > + files_search_etc($1) > > + read_files_pattern($1, hadoop_etc_t, hadoop_etc_t) > > + read_lnk_files_pattern($1, hadoop_etc_t, hadoop_etc_t) > > +') > > + > > +######################################## > > +## > > +## Execute hadoop configuration files. > > +## > > +## > > +## > > +## Domain allowed access. > > +## > > +## > > +# > > +interface(`hadoop_exec_config_files',` > > + gen_require(` > > + type hadoop_etc_t; > > + ') > > + > > + files_search_etc($1) > > + allow $1 hadoop_etc_t:dir search_dir_perms; > > + allow $1 hadoop_etc_t:lnk_file read_lnk_file_perms; > > + can_exec($1, hadoop_etc_t) > > +') > > diff --git a/policy/modules/services/hadoop.te b/policy/modules/services/hadoop.te > > new file mode 100644 > > index 0000000..515d2da > > --- /dev/null > > +++ b/policy/modules/services/hadoop.te > > @@ -0,0 +1,410 @@ > > +policy_module(hadoop, 1.0.0) > > + > > +######################################## > > +# > > +# Hadoop declarations. 
> > +# > > + > > +attribute hadoop_domain; > > + > > +type hadoop_t; > > +type hadoop_exec_t; > > +application_domain(hadoop_t, hadoop_exec_t) > > +ubac_constrained(hadoop_t) > > + > > +type hadoop_etc_t; > > +files_config_file(hadoop_etc_t) > > + > > +type hadoop_var_lib_t; > > +files_type(hadoop_var_lib_t) > > + > > +type hadoop_log_t; > > +logging_log_file(hadoop_log_t) > > + > > +type hadoop_var_run_t; > > +files_pid_file(hadoop_var_run_t) > > + > > +type hadoop_tmp_t; > > +files_tmp_file(hadoop_tmp_t) > > +ubac_constrained(hadoop_tmp_t) > > + > > +type hadoop_hsperfdata_t; > > +files_tmp_file(hadoop_hsperfdata_t) > > +ubac_constrained(hadoop_hsperfdata_t) > > + > > +hadoop_domain_template(datanode) > > +hadoop_domain_template(jobtracker) > > +hadoop_domain_template(namenode) > > +hadoop_domain_template(secondarynamenode) > > +hadoop_domain_template(tasktracker) > > + > > +######################################## > > +# > > +# Hadoop zookeeper client declarations. > > +# > > + > > +type zookeeper_t; > > +type zookeeper_exec_t; > > +application_domain(zookeeper_t, zookeeper_exec_t) > > +ubac_constrained(zookeeper_t) > > + > > +type zookeeper_etc_t; > > +files_config_file(zookeeper_etc_t) > > + > > +type zookeeper_log_t; > > +logging_log_file(zookeeper_log_t) > > + > > +type zookeeper_tmp_t; > > +files_tmp_file(zookeeper_tmp_t) > > +ubac_constrained(zookeeper_tmp_t) > > + > > +######################################## > > +# > > +# Hadoop zookeeper server declarations. 
> > +#
> > +
> > +type zookeeper_server_t;
> > +type zookeeper_server_exec_t;
> > +init_daemon_domain(zookeeper_server_t, zookeeper_server_exec_t)
> > +
> > +type zookeeper_server_initrc_exec_t;
> > +init_script_file(zookeeper_server_initrc_exec_t)
> > +
> > +type zookeeper_server_var_t;
> > +files_type(zookeeper_server_var_t)
> > +
> > +type zookeeper_server_var_run_t;
> > +files_pid_file(zookeeper_server_var_run_t)
> > +
> > +type zookeeper_server_tmp_t;
> > +files_tmp_file(zookeeper_server_tmp_t)
> > +
> > +########################################
> > +#
> > +# Hadoop policy.
> > +#
> > +
> > +allow hadoop_t self:capability sys_resource;
> > +allow hadoop_t self:process { signal_perms setrlimit execmem };
> > +dontaudit hadoop_t self:netlink_route_socket rw_netlink_socket_perms;
> > +allow hadoop_t self:fifo_file rw_fifo_file_perms;
> > +allow hadoop_t self:key write;
> > +allow hadoop_t self:tcp_socket create_stream_socket_perms;
> > +allow hadoop_t self:udp_socket create_socket_perms;
> > +
> > +allow hadoop_t hadoop_domain:process signull;
> > +
> > +read_files_pattern(hadoop_t, hadoop_etc_t, hadoop_etc_t)
> > +read_lnk_files_pattern(hadoop_t, hadoop_etc_t, hadoop_etc_t)
> > +can_exec(hadoop_t, hadoop_etc_t)
> > +
> > +manage_dirs_pattern(hadoop_t, hadoop_var_lib_t, hadoop_var_lib_t)
> > +manage_files_pattern(hadoop_t, hadoop_var_lib_t, hadoop_var_lib_t)
> > +# not sure:
> > +files_search_var_lib(hadoop_t)
> > +
> > +manage_dirs_pattern(hadoop_t, hadoop_log_t, hadoop_log_t)
> > +# not sure:
> > +logging_search_logs(hadoop_t)
> > +getattr_dirs_pattern(hadoop_t, hadoop_var_run_t, hadoop_var_run_t)
> > +# not sure:
> > +files_search_pids(hadoop_t)

> I don't think you need files_search_var_lib, logging_search_logs or files_search_pids.

I wonder where that access is allowed, but if you're sure then this can be removed.
> > +manage_dirs_pattern(hadoop_t, hadoop_tmp_t, hadoop_tmp_t)
> > +manage_files_pattern(hadoop_t, hadoop_tmp_t, hadoop_tmp_t)
> > +filetrans_pattern(hadoop_t, hadoop_hsperfdata_t, hadoop_tmp_t, file)
> > +
> > +manage_dirs_pattern(hadoop_t, hadoop_hsperfdata_t, hadoop_hsperfdata_t)
> > +files_tmp_filetrans(hadoop_t, hadoop_hsperfdata_t, dir)
> > +
> > +kernel_read_network_state(hadoop_t)
> > +kernel_read_system_state(hadoop_t)
> > +
> > +corecmd_exec_bin(hadoop_t)
> > +corecmd_exec_shell(hadoop_t)
> > +
> > +corenet_all_recvfrom_unlabeled(hadoop_t)
> > +corenet_all_recvfrom_netlabel(hadoop_t)
> > +corenet_sendrecv_hadoop_namenode_client_packets(hadoop_t)
> > +corenet_sendrecv_portmap_client_packets(hadoop_t)
> > +corenet_sendrecv_zope_client_packets(hadoop_t)
> > +corenet_tcp_bind_all_nodes(hadoop_t)
> > +corenet_tcp_connect_hadoop_namenode_port(hadoop_t)
> > +corenet_tcp_connect_hadoop_datanode_port(hadoop_t)
> > +corenet_tcp_connect_portmap_port(hadoop_t)
> > +corenet_tcp_connect_zope_port(hadoop_t)
> > +corenet_tcp_sendrecv_all_nodes(hadoop_t)
> > +corenet_tcp_sendrecv_all_ports(hadoop_t)
> > +corenet_tcp_sendrecv_generic_if(hadoop_t)
> > +# Hadoop uses high ordered random ports for services
> > +# If permanent ports are chosen, remove line below and lock down
> > +corenet_tcp_connect_generic_port(hadoop_t)
> > +corenet_udp_bind_all_nodes(hadoop_t)
> > +corenet_udp_sendrecv_all_nodes(hadoop_t)
> > +corenet_udp_sendrecv_all_ports(hadoop_t)
> > +corenet_udp_sendrecv_generic_if(hadoop_t)
> > +
> > +dev_read_rand(hadoop_t)
> > +dev_read_sysfs(hadoop_t)
> > +dev_read_urand(hadoop_t)
> > +
> > +files_dontaudit_list_default(hadoop_t)
> > +files_dontaudit_search_spool(hadoop_t)
> > +files_read_usr_files(hadoop_t)
> > +# Seems a bit coarse
> > +files_read_all_files(hadoop_t)

> Needed for "hadoop fs -put" which adds any file to the distributed file system. Is there a finer-grained alternative?

If it can be any file, then I guess there is no better alternative.
> > > +fs_getattr_xattr_fs(hadoop_t) > > + > > +java_exec(hadoop_t) > > + > > +miscfiles_read_localization(hadoop_t) > > + > > +userdom_dontaudit_search_user_home_dirs(hadoop_t) > > +userdom_use_user_terminals(hadoop_t) > > + > > +optional_policy(` > > + nis_use_ypbind(hadoop_t) > > +') > > + > > +optional_policy(` > > + nscd_socket_use(hadoop_t) > > +') > > + > > +######################################## > > +# > > +# Hadoop datanode policy. > > +# > > + > > +allow hadoop_datanode_t self:process signal; > > + > > +manage_dirs_pattern(hadoop_datanode_t, hadoop_var_lib_t, hadoop_var_lib_t) > > + > > +corenet_sendrecv_hadoop_datanode_client_packets(hadoop_datanode_t) > > +corenet_sendrecv_hadoop_datanode_server_packets(hadoop_datanode_t) > > +corenet_sendrecv_hadoop_namenode_client_packets(hadoop_datanode_t) > > +corenet_tcp_bind_hadoop_datanode_port(hadoop_datanode_t) > > +corenet_tcp_connect_hadoop_datanode_port(hadoop_datanode_t) > > +corenet_tcp_connect_hadoop_namenode_port(hadoop_datanode_t) > > + > > +fs_getattr_xattr_fs(hadoop_datanode_t) > > + > > +######################################## > > +# > > +# Hadoop jobtracker policy. > > +# > > + > > +manage_dirs_pattern(hadoop_jobtracker_t, hadoop_var_lib_t, hadoop_var_lib_t) > > + > > +manage_dirs_pattern(hadoop_jobtracker_t, hadoop_jobtracker_log_t, hadoop_jobtracker_log_t) > > + > > +corenet_sendrecv_hadoop_datanode_client_packets(hadoop_jobtracker_t) > > +corenet_sendrecv_hadoop_namenode_client_packets(hadoop_jobtracker_t) > > +corenet_sendrecv_zope_server_packets(hadoop_jobtracker_t) > > +corenet_tcp_bind_zope_port(hadoop_jobtracker_t) > > +corenet_tcp_connect_hadoop_datanode_port(hadoop_jobtracker_t) > > +corenet_tcp_connect_hadoop_namenode_port(hadoop_jobtracker_t) > > + > > +######################################## > > +# > > +# Hadoop namenode policy. 
> > +# > > + > > +manage_dirs_pattern(hadoop_namenode_t, hadoop_var_lib_t, hadoop_var_lib_t) > > +manage_files_pattern(hadoop_namenode_t, hadoop_var_lib_t, hadoop_var_lib_t) > > + > > +corenet_sendrecv_hadoop_namenode_client_packets(hadoop_namenode_t) > > +corenet_sendrecv_hadoop_namenode_server_packets(hadoop_namenode_t) > > +corenet_tcp_bind_hadoop_namenode_port(hadoop_namenode_t) > > +corenet_tcp_connect_hadoop_namenode_port(hadoop_namenode_t) > > + > > +######################################## > > +# > > +# Hadoop secondary namenode policy. > > +# > > + > > +manage_dirs_pattern(hadoop_secondarynamenode_t, hadoop_var_lib_t, hadoop_var_lib_t) > > + > > +corenet_sendrecv_hadoop_namenode_client_packets(hadoop_secondarynamenode_t) > > +corenet_tcp_connect_hadoop_namenode_port(hadoop_secondarynamenode_t) > > + > > +######################################## > > +# > > +# Hadoop tasktracker policy. > > +# > > + > > +allow hadoop_tasktracker_t self:process signal; > > + > > +manage_dirs_pattern(hadoop_tasktracker_t, hadoop_var_lib_t, hadoop_var_lib_t) > > + > > +manage_dirs_pattern(hadoop_tasktracker_t, hadoop_tasktracker_log_t, hadoop_tasktracker_log_t); > > +filetrans_pattern(hadoop_tasktracker_t, hadoop_log_t, hadoop_tasktracker_log_t, dir) > > + > > +corenet_sendrecv_hadoop_datanode_client_packets(hadoop_tasktracker_t) > > +corenet_sendrecv_hadoop_namenode_client_packets(hadoop_tasktracker_t) > > +corenet_sendrecv_zope_client_packets(hadoop_tasktracker_t) > > +corenet_tcp_connect_hadoop_datanode_port(hadoop_tasktracker_t) > > +corenet_tcp_connect_hadoop_namenode_port(hadoop_tasktracker_t) > > +corenet_tcp_connect_zope_port(hadoop_tasktracker_t) > > + > > +fs_associate(hadoop_tasktracker_t) > > +fs_getattr_xattr_fs(hadoop_tasktracker_t) > > + > > +######################################## > > +# > > +# Hadoop zookeeper client policy. 
> > +#
> > +
> > +allow zookeeper_t self:process { getsched signal_perms execmem };
> > +dontaudit zookeeper_t self:netlink_route_socket rw_netlink_socket_perms;
> > +allow zookeeper_t self:fifo_file rw_fifo_file_perms;
> > +allow zookeeper_t self:tcp_socket create_stream_socket_perms;
> > +allow zookeeper_t self:udp_socket create_socket_perms;
> > +
> > +allow zookeeper_t zookeeper_server_t:process signull;
> > +
> > +read_files_pattern(zookeeper_t, zookeeper_etc_t, zookeeper_etc_t)
> > +read_lnk_files_pattern(zookeeper_t, zookeeper_etc_t, zookeeper_etc_t)
> > +
> > +setattr_dirs_pattern(zookeeper_t, zookeeper_log_t, zookeeper_log_t)
> > +append_files_pattern(zookeeper_t, zookeeper_log_t, zookeeper_log_t)
> > +create_files_pattern(zookeeper_t, zookeeper_log_t, zookeeper_log_t)
> > +read_files_pattern(zookeeper_t, zookeeper_log_t, zookeeper_log_t)
> > +setattr_files_pattern(zookeeper_t, zookeeper_log_t, zookeeper_log_t)
> > +logging_log_filetrans(zookeeper_t, zookeeper_log_t, file)
> > +
> > +manage_dirs_pattern(zookeeper_t, hadoop_hsperfdata_t, hadoop_hsperfdata_t)
> > +files_tmp_filetrans(zookeeper_t, hadoop_hsperfdata_t, dir)
> > +
> > +manage_files_pattern(zookeeper_t, zookeeper_tmp_t, zookeeper_tmp_t)
> > +filetrans_pattern(zookeeper_t, hadoop_hsperfdata_t, zookeeper_tmp_t, file)
> > +
> > +can_exec(zookeeper_t, zookeeper_exec_t)
> > +
> > +kernel_read_network_state(zookeeper_t)
> > +kernel_read_system_state(zookeeper_t)
> > +
> > +corecmd_exec_bin(zookeeper_t)
> > +corecmd_exec_shell(zookeeper_t)
> > +
> > +corenet_all_recvfrom_unlabeled(zookeeper_t)
> > +corenet_all_recvfrom_netlabel(zookeeper_t)
> > +corenet_sendrecv_zookeeper_client_client_packets(zookeeper_t)
> > +corenet_tcp_bind_all_nodes(zookeeper_t)
> > +corenet_tcp_connect_zookeeper_client_port(zookeeper_t)
> > +corenet_tcp_sendrecv_all_nodes(zookeeper_t)
> > +corenet_tcp_sendrecv_all_ports(zookeeper_t)
> > +corenet_tcp_sendrecv_generic_if(zookeeper_t)
> > +# Hadoop uses high-numbered random ports for its services.
> > +# If permanent ports are chosen, remove the line below and lock down.
> > +corenet_tcp_connect_generic_port(zookeeper_t)
> > +corenet_udp_bind_all_nodes(zookeeper_t)
> > +corenet_udp_sendrecv_all_nodes(zookeeper_t)
> > +corenet_udp_sendrecv_all_ports(zookeeper_t)
> > +corenet_udp_sendrecv_generic_if(zookeeper_t)
> > +
> > +dev_read_rand(zookeeper_t)
> > +dev_read_sysfs(zookeeper_t)
> > +dev_read_urand(zookeeper_t)
> > +
> > +files_dontaudit_list_default(zookeeper_t)
> > +files_read_etc_files(zookeeper_t)
> > +files_read_usr_files(zookeeper_t)
> > +
> > +miscfiles_read_localization(zookeeper_t)
> > +
> > +sysnet_read_config(zookeeper_t)
> > +
> > +userdom_dontaudit_search_user_home_dirs(zookeeper_t)
> > +userdom_use_user_terminals(zookeeper_t)
> > +
> > +java_exec(zookeeper_t)
> > +
> > +optional_policy(`
> > +	nscd_socket_use(zookeeper_t)
> > +')
> > +
> > +########################################
> > +#
> > +# Hadoop zookeeper server policy.
> > +#
> > +
> > +allow zookeeper_server_t self:capability kill;
> > +allow zookeeper_server_t self:process { execmem getsched signal_perms };
> > +allow zookeeper_server_t self:fifo_file rw_fifo_file_perms;
> > +allow zookeeper_server_t self:netlink_route_socket rw_netlink_socket_perms;
> > +allow zookeeper_server_t self:tcp_socket create_stream_socket_perms;
> > +allow zookeeper_server_t self:udp_socket create_socket_perms;
> > +
> > +read_files_pattern(zookeeper_server_t, zookeeper_etc_t, zookeeper_etc_t)
> > +read_lnk_files_pattern(zookeeper_server_t, zookeeper_etc_t, zookeeper_etc_t)
> > +
> > +manage_dirs_pattern(zookeeper_server_t, zookeeper_server_var_t, zookeeper_server_var_t)
> > +manage_files_pattern(zookeeper_server_t, zookeeper_server_var_t, zookeeper_server_var_t)
> > +files_var_lib_filetrans(zookeeper_server_t, zookeeper_server_var_t, { dir file })
> > +
> > +setattr_dirs_pattern(zookeeper_server_t, zookeeper_log_t, zookeeper_log_t)
> > +append_files_pattern(zookeeper_server_t, zookeeper_log_t, zookeeper_log_t)
> > +create_files_pattern(zookeeper_server_t, zookeeper_log_t, zookeeper_log_t)
> > +read_files_pattern(zookeeper_server_t, zookeeper_log_t, zookeeper_log_t)
> > +setattr_files_pattern(zookeeper_server_t, zookeeper_log_t, zookeeper_log_t)
> > +logging_log_filetrans(zookeeper_server_t, zookeeper_log_t, file)
> > +
> > +manage_files_pattern(zookeeper_server_t, zookeeper_server_var_run_t, zookeeper_server_var_run_t)
> > +files_pid_filetrans(zookeeper_server_t, zookeeper_server_var_run_t, file)
> > +
> > +manage_files_pattern(zookeeper_server_t, zookeeper_server_tmp_t, zookeeper_server_tmp_t)
> > +filetrans_pattern(zookeeper_server_t, hadoop_hsperfdata_t, zookeeper_server_tmp_t, file)
> > +
> > +manage_dirs_pattern(zookeeper_server_t, hadoop_hsperfdata_t, hadoop_hsperfdata_t)
> > +files_tmp_filetrans(zookeeper_server_t, hadoop_hsperfdata_t, dir)
> > +
> > +can_exec(zookeeper_server_t, zookeeper_server_exec_t)
> > +
> > +kernel_read_network_state(zookeeper_server_t)
> > +kernel_read_system_state(zookeeper_server_t)
> > +
> > +corecmd_exec_bin(zookeeper_server_t)
> > +corecmd_exec_shell(zookeeper_server_t)
> > +
> > +corenet_all_recvfrom_unlabeled(zookeeper_server_t)
> > +corenet_all_recvfrom_netlabel(zookeeper_server_t)
> > +corenet_sendrecv_zookeeper_election_client_packets(zookeeper_server_t)
> > +corenet_sendrecv_zookeeper_leader_client_packets(zookeeper_server_t)
> > +corenet_sendrecv_zookeeper_client_server_packets(zookeeper_server_t)
> > +corenet_sendrecv_zookeeper_election_server_packets(zookeeper_server_t)
> > +corenet_sendrecv_zookeeper_leader_server_packets(zookeeper_server_t)
> > +corenet_tcp_bind_all_nodes(zookeeper_server_t)
> > +corenet_tcp_bind_zookeeper_client_port(zookeeper_server_t)
> > +corenet_tcp_bind_zookeeper_election_port(zookeeper_server_t)
> > +corenet_tcp_bind_zookeeper_leader_port(zookeeper_server_t)
> > +corenet_tcp_connect_zookeeper_election_port(zookeeper_server_t)
> > +corenet_tcp_connect_zookeeper_leader_port(zookeeper_server_t)
> > +corenet_tcp_sendrecv_generic_if(zookeeper_server_t)
> > +corenet_tcp_sendrecv_generic_node(zookeeper_server_t)
> > +corenet_tcp_sendrecv_all_ports(zookeeper_server_t)
> > +# Hadoop uses high-numbered random ports for its services.
> > +# If permanent ports are chosen, remove the line below and lock down.
> > +corenet_tcp_connect_generic_port(zookeeper_server_t)
> > +corenet_udp_sendrecv_generic_if(zookeeper_server_t)
> > +corenet_udp_sendrecv_all_nodes(zookeeper_server_t)
> > +corenet_udp_sendrecv_all_ports(zookeeper_server_t)
> > +corenet_udp_bind_all_nodes(zookeeper_server_t)
> > +
> > +dev_read_rand(zookeeper_server_t)
> > +dev_read_sysfs(zookeeper_server_t)
> > +dev_read_urand(zookeeper_server_t)
> > +
> > +files_read_etc_files(zookeeper_server_t)
> > +files_read_usr_files(zookeeper_server_t)
> > +
> > +fs_getattr_xattr_fs(zookeeper_server_t)
> > +
> > +logging_send_syslog_msg(zookeeper_server_t)
> > +
> > +miscfiles_read_localization(zookeeper_server_t)
> > +
> > +sysnet_read_config(zookeeper_server_t)
> > +
> > +java_exec(zookeeper_server_t)
> > diff --git a/policy/modules/system/unconfined.te b/policy/modules/system/unconfined.te
> > index f976344..f1e6c9f 100644
> > --- a/policy/modules/system/unconfined.te
> > +++ b/policy/modules/system/unconfined.te
> > @@ -118,6 +118,10 @@ optional_policy(`
> >  ')
> >  
> >  optional_policy(`
> > +	hadoop_run(unconfined_t, unconfined_r)
> > +')
> > +
> > +optional_policy(`
> >  	inn_domtrans(unconfined_t)
> >  ')
> >  
> > @@ -210,6 +214,10 @@ optional_policy(`
> >  	xserver_domtrans(unconfined_t)
> >  ')
> >  
> > +optional_policy(`
> > +	hadoop_zookeeper_run_client(unconfined_t, unconfined_r)
> > +')
> > +
> >  ########################################
> >  #
> >  # Unconfined Execmem Local policy
> > 
> > _______________________________________________
> > refpolicy mailing list
> > refpolicy at oss.tresys.com
> > http://oss.tresys.com/mailman/listinfo/refpolicy
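As an aside on the "lock down" comments in the patch: refpolicy generates a corenet_tcp_connect_<name>_port() interface for every network_port() declaration, so once the Hadoop daemons are pinned to permanent ports, the broad corenet_tcp_connect_generic_port() rules could be replaced with per-port rules. A minimal sketch, assuming the hadoop_datanode, hadoop_namenode and zookeeper_client port types this patch declares in corenetwork.te.in cover the deployment (the exact set is site-specific):

```
# Sketch only: narrows the generic-port connect rule once Hadoop no
# longer uses random high-numbered ports. The port types referenced
# here are the ones declared by this patch in corenetwork.te.in.
corenet_tcp_connect_hadoop_datanode_port(zookeeper_t)
corenet_tcp_connect_hadoop_namenode_port(zookeeper_t)
corenet_tcp_connect_zookeeper_client_port(zookeeper_t)
```

The same substitution would apply to zookeeper_server_t and to the other hadoop domains that currently rely on corenet_tcp_connect_generic_port().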