2010-09-21 09:02:04

by domg472

[permalink] [raw]
Subject: [refpolicy] [PATCH] hadoop 1/10 -- unconfined

Well, I've rewritten the policy as much as I can with the information that I currently have.
Because of the use of the hadoop domain attribute, I cannot determine whether it is the initrc script or the application doing something, and so I cannot currently finish the hadoop_domain_template policy.
Also, I have no clue what transitions to the hadoop_t domain. It does not own an initrc script, so I gather it is not an init daemon domain. Must be an application domain then?
A lot of other things aren't clear and/or make no sense.
I have also left out things that I think should be handled differently.

It would be cool if someone could test this policy and provide feedback in the shape of AVC denials.

Some properties of this policy:

The hadoop init script domains must be started by the system, or by unconfined_t or sysadm_t using run_init service <hadoop service>.
To use the zookeeper client domain, the zookeeper_run_client interface must be called for a domain. (For example, if you wish to run it as unconfined_t, you would call zookeeper_run_client(unconfined_t, unconfined_r).)
The zookeeper server seems to be an ordinary init daemon domain.
Since I do not know what kind of domain hadoop_t is, it is currently pretty much unreachable. I have created a hadoop_domtrans interface that can be called, but currently no role is allowed the hadoop_t domain.
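
For example, a local policy module could wire the zookeeper client domain up for the unconfined user along these lines (a sketch; the module name myzookeeper is made up):

```
policy_module(myzookeeper, 1.0.0)

gen_require(`
	type unconfined_t;
	role unconfined_r;
')

# Let unconfined_t domain-transition to zookeeper_t when executing
# zookeeper_exec_t, and allow unconfined_r the zookeeper_t domain.
zookeeper_run_client(unconfined_t, unconfined_r)
```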

Signed-off-by: Dominick Grift <[email protected]>
---
:100644 100644 2ecdde8... 7a1b5de... M policy/modules/kernel/corenetwork.te.in
:000000 100644 0000000... bce5d29... A policy/modules/services/hadoop.fc
:000000 100644 0000000... 462d851... A policy/modules/services/hadoop.if
:000000 100644 0000000... 880f09a... A policy/modules/services/hadoop.te
policy/modules/kernel/corenetwork.te.in | 4 +
policy/modules/services/hadoop.fc | 34 +++
policy/modules/services/hadoop.if | 294 +++++++++++++++++++++++++++
policy/modules/services/hadoop.te | 339 +++++++++++++++++++++++++++++++
4 files changed, 671 insertions(+), 0 deletions(-)

diff --git a/policy/modules/kernel/corenetwork.te.in b/policy/modules/kernel/corenetwork.te.in
index 2ecdde8..7a1b5de 100644
--- a/policy/modules/kernel/corenetwork.te.in
+++ b/policy/modules/kernel/corenetwork.te.in
@@ -105,6 +105,7 @@ network_port(giftd, tcp,1213,s0)
network_port(git, tcp,9418,s0, udp,9418,s0)
network_port(gopher, tcp,70,s0, udp,70,s0)
network_port(gpsd, tcp,2947,s0)
+network_port(hadoop_namenode, tcp,8020,s0)
network_port(hddtemp, tcp,7634,s0)
network_port(howl, tcp,5335,s0, udp,5353,s0)
network_port(hplip, tcp,1782,s0, tcp,2207,s0, tcp,2208,s0, tcp, 8290,s0, tcp,50000,s0, tcp,50002,s0, tcp,8292,s0, tcp,9100,s0, tcp,9101,s0, tcp,9102,s0, tcp,9220,s0, tcp,9221,s0, tcp,9222,s0, tcp,9280,s0, tcp,9281,s0, tcp,9282,s0, tcp,9290,s0, tcp,9291,s0, tcp,9292,s0)
@@ -211,6 +212,9 @@ network_port(xdmcp, udp,177,s0, tcp,177,s0)
network_port(xen, tcp,8002,s0)
network_port(xfs, tcp,7100,s0)
network_port(xserver, tcp,6000-6020,s0)
network_port(zebra, tcp,2600-2604,s0, tcp,2606,s0, udp,2600-2604,s0, udp,2606,s0)
+network_port(zookeeper_client, tcp,2181,s0)
+network_port(zookeeper_election, tcp,3888,s0)
+network_port(zookeeper_leader, tcp,2888,s0)
network_port(zope, tcp,8021,s0)

diff --git a/policy/modules/services/hadoop.fc b/policy/modules/services/hadoop.fc
new file mode 100644
index 0000000..bce5d29
--- /dev/null
+++ b/policy/modules/services/hadoop.fc
@@ -0,0 +1,34 @@
+/etc/hadoop.*(/.*)? gen_context(system_u:object_r:hadoop_etc_t,s0)
+/etc/zookeeper(/.*)? gen_context(system_u:object_r:zookeeper_etc_t,s0)
+/etc/zookeeper\.dist(/.*)? gen_context(system_u:object_r:zookeeper_etc_t,s0)
+
+/etc/rc\.d/init\.d/hadoop-(.*)?-datanode -- gen_context(system_u:object_r:hadoop_datanode_initrc_exec_t,s0)
+/etc/rc\.d/init\.d/hadoop-(.*)?-jobtracker -- gen_context(system_u:object_r:hadoop_jobtracker_initrc_exec_t,s0)
+/etc/rc\.d/init\.d/hadoop-(.*)?-namenode -- gen_context(system_u:object_r:hadoop_namenode_initrc_exec_t,s0)
+/etc/rc\.d/init\.d/hadoop-(.*)?-secondarynamenode -- gen_context(system_u:object_r:hadoop_secondarynamenode_initrc_exec_t,s0)
+/etc/rc\.d/init\.d/hadoop-(.*)?-tasktracker -- gen_context(system_u:object_r:hadoop_tasktracker_initrc_exec_t,s0)
+/etc/rc\.d/init\.d/hadoop-zookeeper -- gen_context(system_u:object_r:zookeeper_server_initrc_exec_t,s0)
+
+/usr/lib/hadoop(.*)?/bin/hadoop -- gen_context(system_u:object_r:hadoop_exec_t,s0)
+
+/usr/bin/zookeeper-client -- gen_context(system_u:object_r:zookeeper_exec_t,s0)
+/usr/bin/zookeeper-server -- gen_context(system_u:object_r:zookeeper_server_exec_t,s0)
+
+/var/zookeeper(/.*)? gen_context(system_u:object_r:zookeeper_server_var_t,s0)
+
+/var/lib/hadoop(.*)? gen_context(system_u:object_r:hadoop_var_lib_t,s0)
+/var/lib/hadoop(.*)?/cache/hadoop/dfs/data(/.*)? gen_context(system_u:object_r:hadoop_datanode_var_lib_t,s0)
+/var/lib/hadoop(.*)?/cache/hadoop/mapred/local/jobTracker(/.*)? gen_context(system_u:object_r:hadoop_jobtracker_var_lib_t,s0)
+/var/lib/hadoop(.*)?/cache/hadoop/dfs/name(/.*)? gen_context(system_u:object_r:hadoop_namenode_var_lib_t,s0)
+/var/lib/hadoop(.*)?/cache/hadoop/dfs/namesecondary(/.*)? gen_context(system_u:object_r:hadoop_secondarynamenode_var_lib_t,s0)
+/var/lib/hadoop(.*)?/cache/hadoop/mapred/local/taskTracker(/.*)? gen_context(system_u:object_r:hadoop_tasktracker_data_t,s0)
+
+/var/log/hadoop(.*)? gen_context(system_u:object_r:hadoop_log_t,s0)
+/var/log/hadoop(.*)?/hadoop-hadoop-datanode-(.*)? gen_context(system_u:object_r:hadoop_datanode_log_t,s0)
+/var/log/hadoop(.*)?/hadoop-hadoop-jobtracker-(.*)? gen_context(system_u:object_r:hadoop_jobtracker_log_t,s0)
+/var/log/hadoop(.*)?/hadoop-hadoop-namenode-(.*)? gen_context(system_u:object_r:hadoop_namenode_log_t,s0)
+/var/log/hadoop(.*)?/hadoop-hadoop-secondarynamenode-(.*)? gen_context(system_u:object_r:hadoop_secondarynamenode_log_t,s0)
+/var/log/hadoop(.*)?/hadoop-hadoop-tasktracker-(.*)? gen_context(system_u:object_r:hadoop_tasktracker_log_t,s0)
+/var/log/zookeeper(/.*)? gen_context(system_u:object_r:zookeeper_log_t,s0)
+
+/var/run/hadoop(.*)? gen_context(system_u:object_r:hadoop_var_run_t,s0)
diff --git a/policy/modules/services/hadoop.if b/policy/modules/services/hadoop.if
new file mode 100644
index 0000000..462d851
--- /dev/null
+++ b/policy/modules/services/hadoop.if
@@ -0,0 +1,294 @@
+## <summary>Software for reliable, scalable, distributed computing.</summary>
+
+#######################################
+## <summary>
+## The template to define a hadoop domain.
+## </summary>
+## <param name="domain_prefix">
+## <summary>
+## Domain prefix to be used.
+## </summary>
+## </param>
+#
+template(`hadoop_domain_template',`
+ gen_require(`
+ attribute hadoop_domain;
+ type hadoop_log_t, hadoop_var_lib_t, hadoop_var_run_t;
+ ')
+
+ ########################################
+ #
+ # Shared declarations.
+ #
+
+ type hadoop_$1_t, hadoop_domain;
+ domain_type(hadoop_$1_t)
+
+ hadoop_exec_entry_type(hadoop_$1_t)
+
+ type hadoop_$1_initrc_t;
+ type hadoop_$1_initrc_exec_t;
+ init_script_domain(hadoop_$1_initrc_t, hadoop_$1_initrc_exec_t)
+
+ role system_r types { hadoop_$1_initrc_t hadoop_$1_t };
+
+ # This will need a file context specification.
+ type hadoop_$1_initrc_lock_t;
+ files_lock_file(hadoop_$1_initrc_lock_t)
+
+ type hadoop_$1_log_t;
+ logging_log_file(hadoop_$1_log_t)
+
+ type hadoop_$1_var_lib_t;
+ files_type(hadoop_$1_var_lib_t)
+
+ # This will need a file context specification.
+ type hadoop_$1_var_run_t;
+ files_pid_file(hadoop_$1_var_run_t)
+
+ type hadoop_$1_tmp_t;
+ files_tmp_file(hadoop_$1_tmp_t)
+
+ # permissive hadoop_$1_t;
+ # permissive hadoop_$1_initrc_t;
+
+ ####################################
+ #
+ # Shared hadoop_$1 initrc policy.
+ #
+
+ allow hadoop_$1_initrc_t self:capability { setuid setgid };
+ dontaudit hadoop_$1_initrc_t self:capability sys_tty_config;
+
+ allow hadoop_$1_initrc_t hadoop_$1_initrc_lock_t:file manage_file_perms;
+ files_lock_filetrans(hadoop_$1_initrc_t, hadoop_$1_initrc_lock_t, file)
+
+ append_files_pattern(hadoop_$1_initrc_t, hadoop_$1_log_t, hadoop_$1_log_t)
+ create_files_pattern(hadoop_$1_initrc_t, hadoop_$1_log_t, hadoop_$1_log_t)
+ read_files_pattern(hadoop_$1_initrc_t, hadoop_$1_log_t, hadoop_$1_log_t)
+ setattr_files_pattern(hadoop_$1_initrc_t, hadoop_$1_log_t, hadoop_$1_log_t)
+ filetrans_pattern(hadoop_$1_initrc_t, hadoop_log_t, hadoop_$1_log_t, file)
+ logging_search_logs(hadoop_$1_initrc_t)
+
+ allow hadoop_$1_initrc_t hadoop_$1_var_run_t:file manage_file_perms;
+ filetrans_pattern(hadoop_$1_initrc_t, hadoop_var_run_t, hadoop_$1_var_run_t, file)
+ files_search_pids(hadoop_$1_initrc_t)
+
+ allow hadoop_$1_initrc_t hadoop_$1_t:process { signal signull };
+
+ hadoop_spec_domtrans(hadoop_$1_initrc_t, hadoop_$1_t)
+
+ kernel_read_kernel_sysctls(hadoop_$1_initrc_t)
+ kernel_read_sysctl(hadoop_$1_initrc_t)
+
+ corecmd_exec_all_executables(hadoop_$1_initrc_t)
+
+ init_rw_utmp(hadoop_$1_initrc_t)
+
+ # This can be removed on anything post-el5
+ libs_use_ld_so(hadoop_$1_initrc_t)
+ libs_use_shared_libs(hadoop_$1_initrc_t)
+
+ logging_send_audit_msgs(hadoop_$1_initrc_t)
+ logging_send_syslog_msg(hadoop_$1_initrc_t)
+
+ ####################################
+ #
+ # Shared hadoop_$1 policy.
+ #
+
+ allow hadoop_$1_t hadoop_domain:process signull;
+
+ # This can be removed on anything post-el5
+ libs_use_ld_so(hadoop_$1_t)
+ libs_use_shared_libs(hadoop_$1_t)
+
+')
+
+########################################
+## <summary>
+## Execute hadoop in the
+## hadoop domain.
+## </summary>
+## <param name="domain">
+## <summary>
+## Domain allowed to transition.
+## </summary>
+## </param>
+#
+interface(`hadoop_domtrans',`
+ gen_require(`
+ type hadoop_t, hadoop_exec_t;
+ ')
+
+ files_search_usr($1)
+ libs_search_lib($1)
+ domtrans_pattern($1, hadoop_exec_t, hadoop_t)
+')
+
+########################################
+## <summary>
+## Make hadoop executable files an
+## entrypoint for the specified domain.
+## </summary>
+## <param name="domain">
+## <summary>
+## The domain for which hadoop_exec_t
+## is an entrypoint.
+## </summary>
+## </param>
+#
+interface(`hadoop_exec_entry_type',`
+ gen_require(`
+ type hadoop_exec_t;
+ ')
+
+ domain_entry_file($1, hadoop_exec_t)
+')
+
+########################################
+## <summary>
+## Execute hadoop in the hadoop domain,
+## and allow the specified role the
+## hadoop domain.
+## </summary>
+## <param name="domain">
+## <summary>
+## Domain allowed to transition.
+## </summary>
+## </param>
+## <param name="role">
+## <summary>
+## Role allowed access.
+## </summary>
+## </param>
+## <rolecap/>
+#
+interface(`hadoop_run',`
+ gen_require(`
+ type hadoop_t;
+ ')
+
+ hadoop_domtrans($1)
+ role $2 types hadoop_t;
+
+ allow $1 hadoop_t:process { ptrace signal_perms };
+ ps_process_pattern($1, hadoop_t)
+')
+
+########################################
+## <summary>
+## Execute hadoop executable files
+## in the specified domain.
+## </summary>
+## <param name="domain">
+## <summary>
+## Domain allowed to transition.
+## </summary>
+## </param>
+## <param name="target_domain">
+## <summary>
+## Domain to transition to.
+## </summary>
+## </param>
+#
+interface(`hadoop_spec_domtrans',`
+ gen_require(`
+ type hadoop_exec_t;
+ ')
+
+ files_search_usr($1)
+ libs_search_lib($1)
+ domain_transition_pattern($1, hadoop_exec_t, $2)
+')
+
+########################################
+## <summary>
+## Execute zookeeper client in the
+## zookeeper client domain.
+## </summary>
+## <param name="domain">
+## <summary>
+## Domain allowed to transition.
+## </summary>
+## </param>
+#
+interface(`zookeeper_domtrans_client',`
+ gen_require(`
+ type zookeeper_t, zookeeper_exec_t;
+ ')
+
+ corecmd_search_bin($1)
+ files_search_usr($1)
+ domtrans_pattern($1, zookeeper_exec_t, zookeeper_t)
+')
+
+########################################
+## <summary>
+## Execute zookeeper server in the
+## zookeeper server domain.
+## </summary>
+## <param name="domain">
+## <summary>
+## Domain allowed to transition.
+## </summary>
+## </param>
+#
+interface(`zookeeper_domtrans_server',`
+ gen_require(`
+ type zookeeper_server_t, zookeeper_server_exec_t;
+ ')
+
+ corecmd_search_bin($1)
+ files_search_usr($1)
+ domtrans_pattern($1, zookeeper_server_exec_t, zookeeper_server_t)
+')
+
+########################################
+## <summary>
+## Execute zookeeper server in the
+## zookeeper domain.
+## </summary>
+## <param name="domain">
+## <summary>
+## Domain allowed to transition.
+## </summary>
+## </param>
+#
+interface(`zookeeper_initrc_domtrans_server',`
+ gen_require(`
+ type zookeeper_server_initrc_exec_t;
+ ')
+
+ init_labeled_script_domtrans($1, zookeeper_server_initrc_exec_t)
+')
+
+########################################
+## <summary>
+## Execute zookeeper client in the
+## zookeeper client domain, and allow the
+## specified role the zookeeper client domain.
+## </summary>
+## <param name="domain">
+## <summary>
+## Domain allowed to transition.
+## </summary>
+## </param>
+## <param name="role">
+## <summary>
+## Role allowed access.
+## </summary>
+## </param>
+## <rolecap/>
+#
+interface(`zookeeper_run_client',`
+ gen_require(`
+ type zookeeper_t;
+ ')
+
+ zookeeper_domtrans_client($1)
+ role $2 types zookeeper_t;
+
+ allow $1 zookeeper_t:process { ptrace signal_perms };
+ ps_process_pattern($1, zookeeper_t)
+')
diff --git a/policy/modules/services/hadoop.te b/policy/modules/services/hadoop.te
new file mode 100644
index 0000000..880f09a
--- /dev/null
+++ b/policy/modules/services/hadoop.te
@@ -0,0 +1,339 @@
+policy_module(hadoop, 1.0.0)
+
+########################################
+#
+# Hadoop declarations.
+#
+
+attribute hadoop_domain;
+
+# What or who runs this?
+type hadoop_t;
+type hadoop_exec_t;
+domain_type(hadoop_t)
+domain_entry_file(hadoop_t, hadoop_exec_t)
+
+type hadoop_etc_t;
+files_config_file(hadoop_etc_t)
+
+type hadoop_var_lib_t;
+files_type(hadoop_var_lib_t)
+
+type hadoop_log_t;
+logging_log_file(hadoop_log_t)
+
+type hadoop_var_run_t;
+files_pid_file(hadoop_var_run_t)
+
+type hadoop_tmp_t;
+files_tmp_file(hadoop_tmp_t)
+
+# permissive hadoop_t;
+
+hadoop_domain_template(datanode)
+hadoop_domain_template(jobtracker)
+hadoop_domain_template(namenode)
+hadoop_domain_template(secondarynamenode)
+hadoop_domain_template(tasktracker)
+
+########################################
+#
+# Hadoop zookeeper client declarations.
+#
+
+type zookeeper_t;
+type zookeeper_exec_t;
+application_domain(zookeeper_t, zookeeper_exec_t)
+ubac_constrained(zookeeper_t)
+
+type zookeeper_etc_t;
+files_config_file(zookeeper_etc_t)
+
+type zookeeper_log_t;
+logging_log_file(zookeeper_log_t)
+
+type zookeeper_tmp_t;
+files_tmp_file(zookeeper_tmp_t)
+ubac_constrained(zookeeper_tmp_t)
+
+# permissive zookeeper_t;
+
+########################################
+#
+# Hadoop zookeeper server declarations.
+#
+
+type zookeeper_server_t;
+type zookeeper_server_exec_t;
+init_daemon_domain(zookeeper_server_t, zookeeper_server_exec_t)
+
+type zookeeper_server_initrc_exec_t;
+init_script_file(zookeeper_server_initrc_exec_t)
+
+type zookeeper_server_var_t;
+files_type(zookeeper_server_var_t)
+
+# This will need a file context specification.
+type zookeeper_server_var_run_t;
+files_pid_file(zookeeper_server_var_run_t)
+
+type zookeeper_server_tmp_t;
+files_tmp_file(zookeeper_server_tmp_t)
+
+# permissive zookeeper_server_t;
+
+########################################
+#
+# Hadoop policy.
+#
+
+allow hadoop_t self:capability sys_resource;
+allow hadoop_t self:process { getsched setsched signal signull setrlimit };
+allow hadoop_t self:fifo_file rw_fifo_file_perms;
+allow hadoop_t self:key write;
+# This probably needs to be allowed.
+dontaudit hadoop_t self:netlink_route_socket rw_netlink_socket_perms;
+allow hadoop_t self:tcp_socket create_stream_socket_perms;
+allow hadoop_t self:udp_socket create_socket_perms;
+
+read_files_pattern(hadoop_t, hadoop_etc_t, hadoop_etc_t)
+read_lnk_files_pattern(hadoop_t, hadoop_etc_t, hadoop_etc_t)
+can_exec(hadoop_t, hadoop_etc_t)
+
+manage_dirs_pattern(hadoop_t, hadoop_var_lib_t, hadoop_var_lib_t)
+manage_files_pattern(hadoop_t, hadoop_var_lib_t, hadoop_var_lib_t)
+
+manage_dirs_pattern(hadoop_t, hadoop_log_t, hadoop_log_t)
+
+# Who or what creates /var/run/hadoop?
+getattr_dirs_pattern(hadoop_t, hadoop_var_run_t, hadoop_var_run_t)
+
+manage_dirs_pattern(hadoop_t, hadoop_tmp_t, hadoop_tmp_t)
+manage_files_pattern(hadoop_t, hadoop_tmp_t, hadoop_tmp_t)
+files_tmp_filetrans(hadoop_t, hadoop_tmp_t, { dir file })
+
+allow hadoop_t hadoop_domain:process signull;
+
+kernel_read_network_state(hadoop_t)
+kernel_read_system_state(hadoop_t)
+
+corecmd_exec_bin(hadoop_t)
+corecmd_exec_shell(hadoop_t)
+
+corenet_all_recvfrom_unlabeled(hadoop_t)
+corenet_all_recvfrom_netlabel(hadoop_t)
+corenet_sendrecv_hadoop_namenode_client_packets(hadoop_t)
+corenet_sendrecv_portmap_client_packets(hadoop_t)
+corenet_sendrecv_zope_client_packets(hadoop_t)
+corenet_tcp_bind_all_nodes(hadoop_t)
+corenet_tcp_connect_hadoop_namenode_port(hadoop_t)
+corenet_tcp_connect_portmap_port(hadoop_t)
+corenet_tcp_connect_zope_port(hadoop_t)
+corenet_tcp_sendrecv_all_nodes(hadoop_t)
+corenet_tcp_sendrecv_all_ports(hadoop_t)
+corenet_tcp_sendrecv_generic_if(hadoop_t)
+corenet_udp_bind_all_nodes(hadoop_t)
+corenet_udp_sendrecv_all_nodes(hadoop_t)
+corenet_udp_sendrecv_all_ports(hadoop_t)
+corenet_udp_sendrecv_generic_if(hadoop_t)
+
+dev_read_rand(hadoop_t)
+dev_read_sysfs(hadoop_t)
+dev_read_urand(hadoop_t)
+
+files_dontaudit_search_spool(hadoop_t)
+files_read_usr_files(hadoop_t)
+files_read_all_files(hadoop_t)
+
+fs_getattr_xattr_fs(hadoop_t)
+
+# This can be removed on anything post-el5
+libs_use_ld_so(hadoop_t)
+libs_use_shared_libs(hadoop_t)
+
+miscfiles_read_localization(hadoop_t)
+
+userdom_dontaudit_search_user_home_dirs(hadoop_t)
+
+optional_policy(`
+ # Java might not be optional
+ java_exec(hadoop_t)
+')
+
+optional_policy(`
+ nis_use_ypbind(hadoop_t)
+')
+
+optional_policy(`
+ nscd_socket_use(hadoop_t)
+')
+
+########################################
+#
+# Hadoop datanode policy.
+#
+
+########################################
+#
+# Hadoop jobtracker policy.
+#
+
+########################################
+#
+# Hadoop namenode policy.
+#
+
+########################################
+#
+# Hadoop secondary namenode policy.
+#
+
+########################################
+#
+# Hadoop tasktracker policy.
+#
+
+########################################
+#
+# Hadoop zookeeper client policy.
+#
+
+allow zookeeper_t self:process { getsched sigkill signal signull };
+allow zookeeper_t self:fifo_file rw_fifo_file_perms;
+allow zookeeper_t self:tcp_socket create_stream_socket_perms;
+allow zookeeper_t self:udp_socket create_socket_perms;
+
+read_files_pattern(zookeeper_t, zookeeper_etc_t, zookeeper_etc_t)
+read_lnk_files_pattern(zookeeper_t, zookeeper_etc_t, zookeeper_etc_t)
+
+setattr_dirs_pattern(zookeeper_t, zookeeper_log_t, zookeeper_log_t)
+append_files_pattern(zookeeper_t, zookeeper_log_t, zookeeper_log_t)
+create_files_pattern(zookeeper_t, zookeeper_log_t, zookeeper_log_t)
+read_files_pattern(zookeeper_t, zookeeper_log_t, zookeeper_log_t)
+setattr_files_pattern(zookeeper_t, zookeeper_log_t, zookeeper_log_t)
+logging_log_filetrans(zookeeper_t, zookeeper_log_t, file)
+
+manage_files_pattern(zookeeper_t, zookeeper_tmp_t, zookeeper_tmp_t)
+files_tmp_filetrans(zookeeper_t, zookeeper_tmp_t, file)
+
+allow zookeeper_t zookeeper_server_t:process signull;
+
+can_exec(zookeeper_t, zookeeper_exec_t)
+
+kernel_read_network_state(zookeeper_t)
+kernel_read_system_state(zookeeper_t)
+
+corecmd_exec_bin(zookeeper_t)
+corecmd_exec_shell(zookeeper_t)
+
+corenet_all_recvfrom_unlabeled(zookeeper_t)
+corenet_all_recvfrom_netlabel(zookeeper_t)
+corenet_sendrecv_zookeeper_client_client_packets(zookeeper_t)
+corenet_tcp_bind_all_nodes(zookeeper_t)
+corenet_tcp_connect_zookeeper_client_port(zookeeper_t)
+corenet_tcp_sendrecv_all_nodes(zookeeper_t)
+corenet_tcp_sendrecv_all_ports(zookeeper_t)
+corenet_tcp_sendrecv_generic_if(zookeeper_t)
+corenet_udp_bind_all_nodes(zookeeper_t)
+corenet_udp_sendrecv_all_nodes(zookeeper_t)
+corenet_udp_sendrecv_all_ports(zookeeper_t)
+corenet_udp_sendrecv_generic_if(zookeeper_t)
+
+dev_read_rand(zookeeper_t)
+dev_read_sysfs(zookeeper_t)
+dev_read_urand(zookeeper_t)
+
+files_read_etc_files(zookeeper_t)
+files_read_usr_files(zookeeper_t)
+
+# This can be removed on anything post-el5
+libs_use_ld_so(zookeeper_t)
+libs_use_shared_libs(zookeeper_t)
+
+miscfiles_read_localization(zookeeper_t)
+
+sysnet_read_config(zookeeper_t)
+
+userdom_dontaudit_search_user_home_dirs(zookeeper_t)
+userdom_use_user_terminals(zookeeper_t)
+
+optional_policy(`
+ # Java might not be optional
+ java_exec(zookeeper_t)
+')
+
+optional_policy(`
+ nscd_socket_use(zookeeper_t)
+')
+
+########################################
+#
+# Hadoop zookeeper server policy.
+#
+
+allow zookeeper_server_t self:capability kill;
+allow zookeeper_server_t self:process { getsched sigkill signal signull };
+allow zookeeper_server_t self:fifo_file rw_fifo_file_perms;
+allow zookeeper_server_t self:netlink_route_socket rw_netlink_socket_perms;
+
+read_files_pattern(zookeeper_server_t, zookeeper_etc_t, zookeeper_etc_t)
+read_lnk_files_pattern(zookeeper_server_t, zookeeper_etc_t, zookeeper_etc_t)
+
+manage_dirs_pattern(zookeeper_server_t, zookeeper_server_var_t, zookeeper_server_var_t)
+manage_files_pattern(zookeeper_server_t, zookeeper_server_var_t, zookeeper_server_var_t)
+files_var_lib_filetrans(zookeeper_server_t, zookeeper_server_var_t, { dir file })
+
+setattr_dirs_pattern(zookeeper_server_t, zookeeper_log_t, zookeeper_log_t)
+append_files_pattern(zookeeper_server_t, zookeeper_log_t, zookeeper_log_t)
+create_files_pattern(zookeeper_server_t, zookeeper_log_t, zookeeper_log_t)
+read_files_pattern(zookeeper_server_t, zookeeper_log_t, zookeeper_log_t)
+setattr_files_pattern(zookeeper_server_t, zookeeper_log_t, zookeeper_log_t)
+logging_log_filetrans(zookeeper_server_t, zookeeper_log_t, file)
+
+manage_files_pattern(zookeeper_server_t, zookeeper_server_var_run_t, zookeeper_server_var_run_t)
+files_pid_filetrans(zookeeper_server_t, zookeeper_server_var_run_t, file)
+
+manage_files_pattern(zookeeper_server_t, zookeeper_server_tmp_t, zookeeper_server_tmp_t)
+files_tmp_filetrans(zookeeper_server_t, zookeeper_server_tmp_t, file)
+
+can_exec(zookeeper_server_t, zookeeper_server_exec_t)
+
+kernel_read_network_state(zookeeper_server_t)
+kernel_read_system_state(zookeeper_server_t)
+
+corecmd_exec_bin(zookeeper_server_t)
+corecmd_exec_shell(zookeeper_server_t)
+
+corenet_all_recvfrom_unlabeled(zookeeper_server_t)
+corenet_all_recvfrom_netlabel(zookeeper_server_t)
+corenet_sendrecv_zookeeper_election_client_packets(zookeeper_server_t)
+corenet_sendrecv_zookeeper_leader_client_packets(zookeeper_server_t)
+corenet_sendrecv_zookeeper_client_server_packets(zookeeper_server_t)
+corenet_sendrecv_zookeeper_election_server_packets(zookeeper_server_t)
+corenet_sendrecv_zookeeper_leader_server_packets(zookeeper_server_t)
+corenet_tcp_bind_all_nodes(zookeeper_server_t)
+corenet_tcp_bind_zookeeper_client_port(zookeeper_server_t)
+corenet_tcp_bind_zookeeper_election_port(zookeeper_server_t)
+corenet_tcp_bind_zookeeper_leader_port(zookeeper_server_t)
+corenet_tcp_connect_zookeeper_election_port(zookeeper_server_t)
+corenet_tcp_connect_zookeeper_leader_port(zookeeper_server_t)
+corenet_tcp_sendrecv_generic_if(zookeeper_server_t)
+corenet_tcp_sendrecv_generic_node(zookeeper_server_t)
+corenet_tcp_sendrecv_all_ports(zookeeper_server_t)
+
+dev_read_rand(zookeeper_server_t)
+dev_read_sysfs(zookeeper_server_t)
+dev_read_urand(zookeeper_server_t)
+
+files_read_etc_files(zookeeper_server_t)
+files_read_usr_files(zookeeper_server_t)
+
+fs_getattr_xattr_fs(zookeeper_server_t)
+
+# This can be removed on anything post-el5
+libs_use_ld_so(zookeeper_server_t)
+libs_use_shared_libs(zookeeper_server_t)
+
+logging_send_syslog_msg(zookeeper_server_t)
+
+miscfiles_read_localization(zookeeper_server_t)
--
1.7.2.3



2010-09-21 15:42:00

by Paul Nuzzi

[permalink] [raw]
Subject: [refpolicy] [PATCH] hadoop 1/10 -- unconfined

On 09/21/2010 05:02 AM, Dominick Grift wrote:
> Well, I've rewritten the policy as much as I can with the information that I currently have.
> Because of the use of the hadoop domain attribute, I cannot determine whether it is the initrc script or the application doing something, and so I cannot currently finish the hadoop_domain_template policy.

The hadoop_domain policy is basic stuff that most programs share, plus a few hadoop specific things. I initially had separate functions for initrc and hadoop type policy.
Since we are not exporting hadoop specific functionality to other modules I removed them from the .if file.

> Also, I have no clue what transitions to the hadoop_t domain. It does not own an initrc script, so I gather it is not an init daemon domain. Must be an application domain then?
> A lot of other things aren't clear and/or make no sense.
> I have also left out things that I think should be handled differently.

hadoop_t is for the hadoop executable which is /usr/bin/hadoop. It does basic file system stuff, submits jobs and administers the cluster.
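
If /usr/bin/hadoop is the entry point, the module's file contexts would presumably also need an entry along these lines (a sketch; the posted patch only labels the binary under /usr/lib/hadoop):

```
/usr/bin/hadoop	--	gen_context(system_u:object_r:hadoop_exec_t,s0)
```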

> It would be cool if someone could test this policy and provide feedback in the shape of avc denials.

I was able to get the zookeeper server and client to run. Here is the audit2allow output from permissive mode. Ignore the networking AVCs; I didn't port the networking functions since it was built as a module.
The zookeeper client doesn't domtrans into a domain. There is an semodule insert error: hadoop_tasktracker_data_t needs to be modified.

#============= zookeeper_server_t ==============
allow zookeeper_server_t java_exec_t:file { read getattr open execute execute_no_trans };
allow zookeeper_server_t net_conf_t:file { read getattr open };
allow zookeeper_server_t port_t:tcp_socket { name_bind name_connect };
allow zookeeper_server_t self:process execmem;
allow zookeeper_server_t self:tcp_socket { setopt read bind create accept write getattr connect shutdown listen };

> Some properties of this policy:
>
> The hadoop init script domains must be started by the system, or by unconfined_t or sysadm_t using run_init service <hadoop service>.
> To use the zookeeper client domain, the zookeeper_run_client interface must be called for a domain. (For example, if you wish to run it as unconfined_t, you would call zookeeper_run_client(unconfined_t, unconfined_r).)
> The zookeeper server seems to be an ordinary init daemon domain.
> Since I do not know what kind of domain hadoop_t is, it is currently pretty much unreachable. I have created a hadoop_domtrans interface that can be called, but currently no role is allowed the hadoop_t domain.
>
> Signed-off-by: Dominick Grift <[email protected]>
> _______________________________________________
> refpolicy mailing list
> refpolicy at oss.tresys.com
> http://oss.tresys.com/mailman/listinfo/refpolicy

2010-09-21 16:14:52

by domg472

[permalink] [raw]
Subject: [refpolicy] [PATCH] hadoop 1/10 -- unconfined

On 09/21/2010 05:42 PM, Paul Nuzzi wrote:
> On 09/21/2010 05:02 AM, Dominick Grift wrote:
>> Well, I've rewritten the policy as much as I can with the information that I currently have.
>> Because of the use of the hadoop domain attribute, I cannot determine whether it is the initrc script or the application doing something, and so I cannot currently finish the hadoop_domain_template policy.
>
> The hadoop_domain policy is basic stuff that most programs share, plus a few hadoop specific things. I initially had separate functions for initrc and hadoop type policy.
> Since we are not exporting hadoop specific functionality to other modules I removed them from the .if file.

With that in mind, it looks like the policy has some duplicate rules.

>> Also, I have no clue what transitions to the hadoop_t domain. It does not own an initrc script, so I gather it is not an init daemon domain. Must be an application domain then?
>> A lot of other things aren't clear and/or make no sense.
>> I have also left out things that I think should be handled differently.
>
> hadoop_t is for the hadoop executable which is /usr/bin/hadoop. It does basic file system stuff, submits jobs and administers the cluster.

And who or what runs it? Who or what transitions to the hadoop_t domain?

>> It would be cool if someone could test this policy and provide feedback in the shape of avc denials.
>
> I was able to get the zookeeper server and client to run. Here is the audit2allow output from permissive mode. Ignore the networking AVCs; I didn't port the networking functions since it was built as a module.
> The zookeeper client doesn't domtrans into a domain. There is an semodule insert error: hadoop_tasktracker_data_t needs to be modified.

Thanks, I fixed that file context specification now.

Were you able to run the init script domains in permissive mode? Does it
work when you use run_init? Do the initrc domains properly transition to
the main domains in permissive mode?

Could you provide some AVC denials for that?

You should also specify file contexts for the pid files and lock files.
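
Something along these lines, presumably; the datanode is shown as an example, and the exact pid and lock file names would need to be confirmed on a running system:

```
/var/lock/subsys/hadoop-datanode	--	gen_context(system_u:object_r:hadoop_datanode_initrc_lock_t,s0)
/var/run/hadoop(.*)?/hadoop-hadoop-datanode\.pid	--	gen_context(system_u:object_r:hadoop_datanode_var_run_t,s0)
```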

>
> #============= zookeeper_server_t ==============
> allow zookeeper_server_t java_exec_t:file { read getattr open execute execute_no_trans };
> allow zookeeper_server_t net_conf_t:file { read getattr open };
> allow zookeeper_server_t port_t:tcp_socket { name_bind name_connect };

What port is it connecting and binding sockets to? Why are they not
labelled?
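
If the server uses additional fixed ports (Java's JMX management port, for instance), they would need network_port() declarations in corenetwork.te.in like the ones this patch already adds, so the policy can bind and connect by name instead of falling back to generic port_t. A sketch, with a made-up port name and number:

```
network_port(zookeeper_jmx, tcp,9999,s0)
```

The domain could then call the generated corenet_tcp_bind_zookeeper_jmx_port() interface rather than binding to unlabelled ports.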

> allow zookeeper_server_t self:process execmem;
> allow zookeeper_server_t self:tcp_socket { setopt read bind create accept write getattr connect shutdown listen };
>

I will add the above rules to the policy that I have, except for the
bind/connect to generic port types, as this seems like a bad idea to me.

Were there no denials left for the zookeeper client? Did you use
zookeeper_run_client() to transition to the zookeeper_t domain?

>> Some properties of this policy:
>>
>> The hadoop init script domains must be started by the system, or by unconfined_t or sysadm_t using run_init service <hadoop service>.
>> To use the zookeeper client domain, the zookeeper_run_client interface must be called for a domain. (For example, if you wish to run it as unconfined_t, you would call zookeeper_run_client(unconfined_t, unconfined_r).)
>> The zookeeper server seems to be an ordinary init daemon domain.
>> Since I do not know what kind of domain hadoop_t is, it is currently pretty much unreachable. I have created a hadoop_domtrans interface that can be called, but currently no role is allowed the hadoop_t domain.
>>
>> Signed-off-by: Dominick Grift <[email protected]>
>> _______________________________________________
>> refpolicy mailing list
>> refpolicy at oss.tresys.com
>> http://oss.tresys.com/mailman/listinfo/refpolicy



2010-09-21 16:34:34

by Paul Nuzzi

[permalink] [raw]
Subject: [refpolicy] [PATCH] hadoop 1/10 -- unconfined

On 09/21/2010 12:14 PM, Dominick Grift wrote:
> On 09/21/2010 05:42 PM, Paul Nuzzi wrote:
>> On 09/21/2010 05:02 AM, Dominick Grift wrote:
>>> Well ive rewritten the policy as much as i ca with the information that i currently have.
>>> Because of the use of the hadoop domain attributes i cannot determine whether it is the initrc script doing something or the application, and so i cannot currently finish the hadoop_domain_template policy.
>>
>> The hadoop_domain policy is basic stuff that most programs share, plus a few hadoop specific things. I initially had separate functions for initrc and hadoop type policy.
>> Since we are not exporting hadoop specific functionality to other modules I removed them from the .if file.
>
> With that in mind, it looks like the policy has some duplicate rules.
>
>>> Also I have no clue what transitions to the hadoop_t domain. It does not own an initrc script, so I gather it is not an init daemon domain. Must be an application domain then?
>>> A lot of other things aren't clear and/or make no sense.
>>> I have also left out things that I think should be handled differently.
>>
>> hadoop_t is for the hadoop executable, which is /usr/bin/hadoop. It does basic file system stuff, submits jobs, and administers the cluster.
>
> And who or what runs it? Who or what transitions to the hadoop_t domain?

All of the users run it. Users and the sysadm need to transition to the hadoop_t domain.

>>> It would be cool if someone could test this policy and provide feedback in the shape of avc denials.
>>
>> I was able to get zookeeper server and client to run. Here is the audit2allow in permissive mode. Ignore the networking avcs. I didn't port the networking functions since it was built as a module.
>> Zookeeper client doesn't domtrans into a domain. There is an semodule insert error. hadoop_tasktracker_data_t needs to be modified.
>
> Thanks, I fixed that file context specification now.
>
> Were you able to run the init script domains in permissive mode? Does it
> work when you use run_init? Do the initrc domains properly transition to
> the main domains in permissive mode?

None of the pseudo initrc domains transitioned to the target domain using run_init.

> Could you provide some avc denials of that?

There don't seem to be any denials for the domtrans.

> You should also specify file contexts for the pid files and lock files.

system_u:system_r:hadoop_datanode_initrc_t:s0 0 S 489 3125 1 1 80 0 - 579640 futex_ ? 00:00:02 java
system_u:system_r:hadoop_namenode_initrc_t:s0 0 S 489 3376 1 2 80 0 - 581189 futex_ ? 00:00:02 java
system_u:system_r:zookeeper_server_t:s0 0 S 488 3598 1 0 80 0 - 496167 futex_ ? 00:00:00 java

-rw-r--r--. hadoop hadoop system_u:object_r:hadoop_datanode_var_run_t:s0 hadoop-hadoop-datanode.pid
-rw-r--r--. hadoop hadoop system_u:object_r:hadoop_namenode_var_run_t:s0 hadoop-hadoop-namenode.pid

-rw-r--r--. root root system_u:object_r:hadoop_datanode_initrc_lock_t:s0 /var/lock/subsys/hadoop-datanode
-rw-r--r--. root root system_u:object_r:hadoop_namenode_initrc_lock_t:s0 /var/lock/subsys/hadoop-namenode
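
Based on the listings above, the corresponding file context entries would look something like this (a sketch; the exact pid file paths under /var/run are an assumption):

```
/var/run/hadoop/hadoop-hadoop-datanode\.pid	--	gen_context(system_u:object_r:hadoop_datanode_var_run_t,s0)
/var/run/hadoop/hadoop-hadoop-namenode\.pid	--	gen_context(system_u:object_r:hadoop_namenode_var_run_t,s0)
/var/lock/subsys/hadoop-datanode	--	gen_context(system_u:object_r:hadoop_datanode_initrc_lock_t,s0)
/var/lock/subsys/hadoop-namenode	--	gen_context(system_u:object_r:hadoop_namenode_initrc_lock_t,s0)
```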

>>
>> #============= zookeeper_server_t ==============
>> allow zookeeper_server_t java_exec_t:file { read getattr open execute execute_no_trans };
>> allow zookeeper_server_t net_conf_t:file { read getattr open };
>> allow zookeeper_server_t port_t:tcp_socket { name_bind name_connect };
>
> What port is it connecting and binding sockets to? Why are they not
> labelled?

I left out the networking since I built it as a module. I haven't had luck running refpolicy on Fedora. The corenet_* functions might need to be written if refpolicy doesn't want all the ports permanently defined.
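
Once the ports are declared in corenetwork.te.in, the generated corenet_* interfaces can replace the generic port_t rules above. A sketch, assuming the ZooKeeper defaults (2181 client, 2888 peer, 3888 election):

```
# corenetwork.te.in:
network_port(zookeeper_client, tcp,2181,s0)
network_port(zookeeper_election, tcp,3888,s0)
network_port(zookeeper_leader, tcp,2888,s0)

# zookeeper server policy, instead of name_bind/name_connect on port_t:
corenet_tcp_bind_zookeeper_client_port(zookeeper_server_t)
corenet_tcp_bind_zookeeper_election_port(zookeeper_server_t)
corenet_tcp_connect_zookeeper_leader_port(zookeeper_server_t)
```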

>> allow zookeeper_server_t self:process execmem;
>> allow zookeeper_server_t self:tcp_socket { setopt read bind create accept write getattr connect shutdown listen };
>>
>
> I will add the above rules to the policy that I have, except for the
> bind/connect to generic port types, as this seems like a bad idea to me.

I think I left out binding to generic ports in my policy.

> Were there no denials left for the zookeeper client? Did you use
> zookeeper_run_client() to transition to the zookeeper_t domain?

zookeeper_client transitioned to the unconfined_java_t domain so there were no denials. I ran your patched policy without any modifications.


2010-09-21 17:08:41

by domg472

[permalink] [raw]
Subject: [refpolicy] [PATCH] hadoop 1/10 -- unconfined

On 09/21/2010 06:34 PM, Paul Nuzzi wrote:
> On 09/21/2010 12:14 PM, Dominick Grift wrote:
>> On 09/21/2010 05:42 PM, Paul Nuzzi wrote:
>>> On 09/21/2010 05:02 AM, Dominick Grift wrote:
>>>> Well, I've rewritten the policy as much as I can with the information that I currently have.
>>>> Because of the use of the hadoop domain attributes, I cannot determine whether it is the initrc script or the application doing something, and so I cannot currently finish the hadoop_domain_template policy.
>>>
>>> The hadoop_domain policy is basic stuff that most programs share, plus a few hadoop specific things. I initially had separate functions for initrc and hadoop type policy.
>>> Since we are not exporting hadoop specific functionality to other modules I removed them from the .if file.
>>
>> With that in mind, it looks like the policy has some duplicate rules.
>>
>>>> Also I have no clue what transitions to the hadoop_t domain. It does not own an initrc script, so I gather it is not an init daemon domain. Must be an application domain then?
>>>> A lot of other things aren't clear and/or make no sense.
>>>> I have also left out things that I think should be handled differently.
>>>
>>> hadoop_t is for the hadoop executable, which is /usr/bin/hadoop. It does basic file system stuff, submits jobs, and administers the cluster.
>>
>> And who or what runs it? Who or what transitions to the hadoop_t domain?
>
> All of the users run it. Users and the sysadm need to transition to the hadoop_t domain.

OK, so you can transition to it by creating a custom module with the
following:

hadoop_run(sysadm_t, sysadm_r)

Can you confirm that this works?
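
A minimal custom module for that test could look like this (a sketch; the sysadm types come from the standard policy):

```
policy_module(myhadoop, 1.0.0)

gen_require(`
	type sysadm_t;
	role sysadm_r;
')

# allow sysadm to domain-transition to hadoop_t when running /usr/bin/hadoop
hadoop_run(sysadm_t, sysadm_r)
```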

>>>> It would be cool if someone could test this policy and provide feedback in the shape of avc denials.
>>>
>>> I was able to get zookeeper server and client to run. Here is the audit2allow in permissive mode. Ignore the networking avcs. I didn't port the networking functions since it was built as a module.
>>> Zookeeper client doesn't domtrans into a domain. There is an semodule insert error. hadoop_tasktracker_data_t needs to be modified.
>>
>> Thanks, I fixed that file context specification now.
>>
>> Were you able to run the init script domains in permissive mode? Does it
>> work when you use run_init? Do the initrc domains properly transition to
>> the main domains in permissive mode?
>
> None of the pseudo initrc domains transitioned to the target domain using run_init.

Any avc denials related to this? Because the domain transitions are
specified in policy (example: hadoop_datanode_initrc_t -> hadoop_exec_t
-> hadoop_datanode_t)
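
That chain corresponds to a rule like the following in the policy (a sketch using the standard refpolicy domtrans pattern):

```
# executing a hadoop_exec_t file from hadoop_datanode_initrc_t
# results in a process running in hadoop_datanode_t
domtrans_pattern(hadoop_datanode_initrc_t, hadoop_exec_t, hadoop_datanode_t)
```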

>
>> Could you provide some avc denials of that?

> There don't seem to be any denials for the domtrans.

So the domain transition does not occur, but no avc denials are shown?
That is strange.
Maybe semodule -DB will expose some related information. Also check for
SELINUX_ERR (grep -i SELINUX_ERR /var/log/audit/audit.log).

Are the executables properly labelled?

>
>> You should also specify file contexts for the pid files and lock files.
>
> system_u:system_r:hadoop_datanode_initrc_t:s0 0 S 489 3125 1 1 80 0 - 579640 futex_ ? 00:00:02 java
> system_u:system_r:hadoop_namenode_initrc_t:s0 0 S 489 3376 1 2 80 0 - 581189 futex_ ? 00:00:02 java
> system_u:system_r:zookeeper_server_t:s0 0 S 488 3598 1 0 80 0 - 496167 futex_ ? 00:00:00 java
>
> -rw-r--r--. hadoop hadoop system_u:object_r:hadoop_datanode_var_run_t:s0 hadoop-hadoop-datanode.pid
> -rw-r--r--. hadoop hadoop system_u:object_r:hadoop_namenode_var_run_t:s0 hadoop-hadoop-namenode.pid

>
> -rw-r--r--. root root system_u:object_r:hadoop_datanode_initrc_lock_t:s0 /var/lock/subsys/hadoop-datanode
> -rw-r--r--. root root system_u:object_r:hadoop_namenode_initrc_lock_t:s0 /var/lock/subsys/hadoop-namenode
>
>>>
>>> #============= zookeeper_server_t ==============
>>> allow zookeeper_server_t java_exec_t:file { read getattr open execute execute_no_trans };
>>> allow zookeeper_server_t net_conf_t:file { read getattr open };
>>> allow zookeeper_server_t port_t:tcp_socket { name_bind name_connect };
>>
>> What port is it connecting and binding sockets to? Why are they not
>> labelled?
>
> I left out the networking since I built it as a module. I haven't had luck running refpolicy on Fedora. The corenet_* functions might need to be written if refpolicy doesn't want all the ports permanently defined.

You could patch Fedora's selinux-policy RPM; that is what I usually do.
Anyway, I will just assume it's the ports we declared.
>
>>> allow zookeeper_server_t self:process execmem;
>>> allow zookeeper_server_t self:tcp_socket { setopt read bind create accept write getattr connect shutdown listen };
>>>
>>
>> I will add the above rules to the policy that I have, except for the
>> bind/connect to generic port types, as this seems like a bad idea to me.
>
> I think I left out binding to generic ports in my policy.
>
>> Were there no denials left for the zookeeper client? Did you use
>> zookeeper_run_client() to transition to the zookeeper_t domain?
>
> zookeeper_client transitioned to the unconfined_java_t domain so there were no denials. I ran your patched policy without any modifications.
>

That is probably because you ran it in the unconfined domain. You should
use the zookeeper_run_client() interface that my patch provides, so that
you can transition to the confined domain.

I will add file context specifications for the locks and pids you have
reported.




2010-09-21 19:55:12

by jsolt

[permalink] [raw]
Subject: [refpolicy] [PATCH] hadoop 1/10 -- unconfined


> >> The hadoop_domain policy is basic stuff that most programs share, plus a few hadoop specific things. I initially had separate functions for initrc and hadoop type policy.
> >> Since we are not exporting hadoop specific functionality to other modules I removed them from the .if file.
> >
Could you send an updated patch so I can see these changes?


--
Jeremy J. Solt
Tresys Technology, LLC
410-290-1411 x122

2010-09-23 13:54:28

by Paul Nuzzi

[permalink] [raw]
Subject: [refpolicy] [PATCH] hadoop 1/10 -- unconfined

On 09/21/2010 01:08 PM, Dominick Grift wrote:
> On 09/21/2010 06:34 PM, Paul Nuzzi wrote:
>> On 09/21/2010 12:14 PM, Dominick Grift wrote:
>>> On 09/21/2010 05:42 PM, Paul Nuzzi wrote:
>>>> On 09/21/2010 05:02 AM, Dominick Grift wrote:
>>>>> Well, I've rewritten the policy as much as I can with the information that I currently have.
>>>>> Because of the use of the hadoop domain attributes, I cannot determine whether it is the initrc script or the application doing something, and so I cannot currently finish the hadoop_domain_template policy.
>>>>
>>>> The hadoop_domain policy is basic stuff that most programs share, plus a few hadoop specific things. I initially had separate functions for initrc and hadoop type policy.
>>>> Since we are not exporting hadoop specific functionality to other modules I removed them from the .if file.
>>>
>>> With that in mind, it looks like the policy has some duplicate rules.
>>>
>>>>> Also I have no clue what transitions to the hadoop_t domain. It does not own an initrc script, so I gather it is not an init daemon domain. Must be an application domain then?
>>>>> A lot of other things aren't clear and/or make no sense.
>>>>> I have also left out things that I think should be handled differently.
>>>>
>>>> hadoop_t is for the hadoop executable, which is /usr/bin/hadoop. It does basic file system stuff, submits jobs, and administers the cluster.
>>>
>>> And who or what runs it? Who or what transitions to the hadoop_t domain?
>>
>> All of the users run it. Users and the sysadm need to transition to the hadoop_t domain.
>
> ok so you can transition to it by creating a custom module with the
> following:
>
> hadoop_run(sysadm_t, sysadm_r)
>
> Can you confirm that this works?

They are transitioning correctly for sysadm_u.

>>>>> It would be cool if someone could test this policy and provide feedback in the shape of avc denials.
>>>>
>>>> I was able to get zookeeper server and client to run. Here is the audit2allow in permissive mode. Ignore the networking avcs. I didn't port the networking functions since it was built as a module.
>>>> Zookeeper client doesn't domtrans into a domain. There is an semodule insert error. hadoop_tasktracker_data_t needs to be modified.
>>>
>>> Thanks, I fixed that file context specification now.
>>>
>>> Were you able to run the init script domains in permissive mode? Does it
>>> work when you use run_init? Do the initrc domains properly transition to
>>> the main domains in permissive mode?
>>
>> None of the pseudo initrc domains transitioned to the target domain using run_init.
>
> Any avc denials related to this? Because the domain transitions are
> specified in policy (example: hadoop_datanode_initrc_t -> hadoop_exec_t
> -> hadoop_datanode_t)
>
>>
>>> Could you provide some avc denials of that?
>
>> There don't seem to be any denials for the domtrans.
>
> So the domain transition does not occur, but no avc denials are shown?
> That is strange.
> Maybe semodule -DB will expose some related information. Also check for
> SELINUX_ERR (grep -i SELINUX_ERR /var/log/audit/audit.log).
>
> Are the executables properly labelled?

I don't know if you changed anything with the new patch, but they seem to be transitioning correctly.
I added a separate module with hadoop_run(sysadm_t, sysadm_r) and hadoop_run(unconfined_t, unconfined_r).
To get it to compile, you need to add a gen_require for hadoop_exec_t to hadoop_domtrans.
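
In other words, the interface needs to pull in the types it references, roughly (a sketch of the fix):

```
interface(`hadoop_domtrans',`
	gen_require(`
		type hadoop_t, hadoop_exec_t;
	')

	domtrans_pattern($1, hadoop_exec_t, hadoop_t)
')
```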

>>
>>> You should also specify file contexts for the pid files and lock files.
>>
>> system_u:system_r:hadoop_datanode_initrc_t:s0 0 S 489 3125 1 1 80 0 - 579640 futex_ ? 00:00:02 java
>> system_u:system_r:hadoop_namenode_initrc_t:s0 0 S 489 3376 1 2 80 0 - 581189 futex_ ? 00:00:02 java
>> system_u:system_r:zookeeper_server_t:s0 0 S 488 3598 1 0 80 0 - 496167 futex_ ? 00:00:00 java
>>
>> -rw-r--r--. hadoop hadoop system_u:object_r:hadoop_datanode_var_run_t:s0 hadoop-hadoop-datanode.pid
>> -rw-r--r--. hadoop hadoop system_u:object_r:hadoop_namenode_var_run_t:s0 hadoop-hadoop-namenode.pid
>
>>
>> -rw-r--r--. root root system_u:object_r:hadoop_datanode_initrc_lock_t:s0 /var/lock/subsys/hadoop-datanode
>> -rw-r--r--. root root system_u:object_r:hadoop_namenode_initrc_lock_t:s0 /var/lock/subsys/hadoop-namenode
>>
>>>>
>>>> #============= zookeeper_server_t ==============
>>>> allow zookeeper_server_t java_exec_t:file { read getattr open execute execute_no_trans };
>>>> allow zookeeper_server_t net_conf_t:file { read getattr open };
>>>> allow zookeeper_server_t port_t:tcp_socket { name_bind name_connect };
>>>
>>> What port is it connecting and binding sockets to? Why are they not
>>> labelled?
>>
>> I left out the networking since I built it as a module. I haven't had luck running refpolicy on Fedora. The corenet_* functions might need to be written if refpolicy doesn't want all the ports permanently defined.
>
> You could patch Fedora's selinux-policy RPM; that is what I usually do.
> Anyway, I will just assume it's the ports we declared.
>>
>>>> allow zookeeper_server_t self:process execmem;
>>>> allow zookeeper_server_t self:tcp_socket { setopt read bind create accept write getattr connect shutdown listen };
>>>>
>>>
>>> I will add the above rules to the policy that I have, except for the
>>> bind/connect to generic port types, as this seems like a bad idea to me.
>>
>> I think I left out binding to generic ports in my policy.
>>
>>> Were there no denials left for the zookeeper client? Did you use
>>> zookeeper_run_client() to transition to the zookeeper_t domain?
>>
>> zookeeper_client transitioned to the unconfined_java_t domain so there were no denials. I ran your patched policy without any modifications.
>>
>
> That is probably because you ran it in the unconfined domain. You should
> use the zookeeper_run_client() interface that my patch provides, so that
> you can transition to the confined domain.

Looks like that transitions correctly when I add zookeeper_run_client(unconfined_t, unconfined_r).
Thanks for taking an interest in the patch. How do we want to merge your changes with mine?

> I will add file context specifications for the locks and pids you have
> reported.
>

2010-09-23 14:40:44

by domg472

[permalink] [raw]
Subject: [refpolicy] [PATCH] hadoop 1/10 -- unconfined

On 09/23/2010 03:54 PM, Paul Nuzzi wrote:
> On 09/21/2010 01:08 PM, Dominick Grift wrote:
>> On 09/21/2010 06:34 PM, Paul Nuzzi wrote:
>>> On 09/21/2010 12:14 PM, Dominick Grift wrote:
>>>> On 09/21/2010 05:42 PM, Paul Nuzzi wrote:
>>>>> On 09/21/2010 05:02 AM, Dominick Grift wrote:
>>>>>> Well, I've rewritten the policy as much as I can with the information that I currently have.
>>>>>> Because of the use of the hadoop domain attributes, I cannot determine whether it is the initrc script or the application doing something, and so I cannot currently finish the hadoop_domain_template policy.
>>>>>
>>>>> The hadoop_domain policy is basic stuff that most programs share, plus a few hadoop specific things. I initially had separate functions for initrc and hadoop type policy.
>>>>> Since we are not exporting hadoop specific functionality to other modules I removed them from the .if file.
>>>>
>>>> With that in mind, it looks like the policy has some duplicate rules.
>>>>
>>>>>> Also I have no clue what transitions to the hadoop_t domain. It does not own an initrc script, so I gather it is not an init daemon domain. Must be an application domain then?
>>>>>> A lot of other things aren't clear and/or make no sense.
>>>>>> I have also left out things that I think should be handled differently.
>>>>>
>>>>> hadoop_t is for the hadoop executable, which is /usr/bin/hadoop. It does basic file system stuff, submits jobs, and administers the cluster.
>>>>
>>>> And who or what runs it? Who or what transitions to the hadoop_t domain?
>>>
>>> All of the users run it. Users and the sysadm need to transition to the hadoop_t domain.
>>
>> ok so you can transition to it by creating a custom module with the
>> following:
>>
>> hadoop_run(sysadm_t, sysadm_r)
>>
>> Can you confirm that this works?
>
> They are transitioning correctly for sysadm_u.

Thanks
>
>>>>>> It would be cool if someone could test this policy and provide feedback in the shape of avc denials.
>>>>>
>>>>> I was able to get zookeeper server and client to run. Here is the audit2allow in permissive mode. Ignore the networking avcs. I didn't port the networking functions since it was built as a module.
>>>>> Zookeeper client doesn't domtrans into a domain. There is an semodule insert error. hadoop_tasktracker_data_t needs to be modified.
>>>>
>>>> Thanks, I fixed that file context specification now.
>>>>
>>>> Were you able to run the init script domains in permissive mode? Does it
>>>> work when you use run_init? Do the initrc domains properly transition to
>>>> the main domains in permissive mode?
>>>
>>> None of the pseudo initrc domains transitioned to the target domain using run_init.
>>
>> Any avc denials related to this? Because the domain transitions are
>> specified in policy (example: hadoop_datanode_initrc_t -> hadoop_exec_t
>> -> hadoop_datanode_t)
>>
>>>
>>>> Could you provide some avc denials of that?
>>
>>> There don't seem to be any denials for the domtrans.
>>
>> So the domain transition does not occur, but no avc denials are shown?
>> That is strange.
>> Maybe semodule -DB will expose some related information. Also check for
>> SELINUX_ERR (grep -i SELINUX_ERR /var/log/audit/audit.log).
>>
>> Are the executables properly labelled?
>
> I don't know if you changed anything with the new patch but they seem to be transitioning correctly.
> I added a separate module with hadoop_run(sysadm_t, sysadm_r) and hadoop_run(unconfined_t, unconfined_r).
> To get it to compile you need to add a gen_require hadoop_exec_t to hadoop_domtrans.

Thanks, I fixed the hadoop_exec_t requirement. I did change some trivial
things; I am not sure whether they are related to it, though.

>
>>>
>>>> You should also specify file contexts for the pid files and lock files.
>>>
>>> system_u:system_r:hadoop_datanode_initrc_t:s0 0 S 489 3125 1 1 80 0 - 579640 futex_ ? 00:00:02 java
>>> system_u:system_r:hadoop_namenode_initrc_t:s0 0 S 489 3376 1 2 80 0 - 581189 futex_ ? 00:00:02 java
>>> system_u:system_r:zookeeper_server_t:s0 0 S 488 3598 1 0 80 0 - 496167 futex_ ? 00:00:00 java
>>>
>>> -rw-r--r--. hadoop hadoop system_u:object_r:hadoop_datanode_var_run_t:s0 hadoop-hadoop-datanode.pid
>>> -rw-r--r--. hadoop hadoop system_u:object_r:hadoop_namenode_var_run_t:s0 hadoop-hadoop-namenode.pid
>>
>>>
>>> -rw-r--r--. root root system_u:object_r:hadoop_datanode_initrc_lock_t:s0 /var/lock/subsys/hadoop-datanode
>>> -rw-r--r--. root root system_u:object_r:hadoop_namenode_initrc_lock_t:s0 /var/lock/subsys/hadoop-namenode
>>>
>>>>>
>>>>> #============= zookeeper_server_t ==============
>>>>> allow zookeeper_server_t java_exec_t:file { read getattr open execute execute_no_trans };
>>>>> allow zookeeper_server_t net_conf_t:file { read getattr open };
>>>>> allow zookeeper_server_t port_t:tcp_socket { name_bind name_connect };
>>>>
>>>> What port is it connecting and binding sockets to? Why are they not
>>>> labelled?
>>>
>>> I left out the networking since I built it as a module. I haven't had luck running refpolicy on Fedora. The corenet_* functions might need to be written if refpolicy doesn't want all the ports permanently defined.
>>
>> You could patch Fedora's selinux-policy RPM; that is what I usually do.
>> Anyway, I will just assume it's the ports we declared.
>>>
>>>>> allow zookeeper_server_t self:process execmem;
>>>>> allow zookeeper_server_t self:tcp_socket { setopt read bind create accept write getattr connect shutdown listen };
>>>>>
>>>>
>>>> I will add the above rules to the policy that I have, except for the
>>>> bind/connect to generic port types, as this seems like a bad idea to me.
>>>
>>> I think I left out binding to generic ports in my policy.
>>>
>>>> Were there no denials left for the zookeeper client? Did you use
>>>> zookeeper_run_client() to transition to the zookeeper_t domain?
>>>
>>> zookeeper_client transitioned to the unconfined_java_t domain so there were no denials. I ran your patched policy without any modifications.
>>>
>>
>> That is probably because you ran it in the unconfined domain. You should
>> use the zookeeper_run_client() interface that my patch provides, so that
>> you can transition to the confined domain.
>
> Looks like that transitions correctly when I add zookeeper_run_client(unconfined_t, unconfined_r).
> Thanks for taking an interest in the patch. How do we want to merge your changes with mine?

Honestly, I think there are some mistakes in your version, and I suspect
that I adopted some of those mistakes into my patch.

For example, in my patch I allow the hadoop rc scripts to create pid
files, but from your feedback I strongly suspect that the rc scripts do
not create the pid files:

>>> -rw-r--r--. hadoop hadoop
system_u:object_r:hadoop_datanode_var_run_t:s0 hadoop-hadoop-datanode.pid

This shows that a process running as the hadoop user created the hadoop
datanode pid file. If the rc script had created it, the owner would, I
suspect, be root rather than hadoop.
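
So the pid file type transition presumably belongs in the daemon domain rather than the rc script domain, roughly (a sketch; interface and type names as used in the patch):

```
# the java daemon, not the rc script, creates its pid file
manage_files_pattern(hadoop_datanode_t, hadoop_datanode_var_run_t, hadoop_datanode_var_run_t)
files_pid_filetrans(hadoop_datanode_t, hadoop_datanode_var_run_t, file)
```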

My opinion is that it is best to start over and use my patch as a clean
base.

Then we can extend my patch with any raw AVC denials.

I will post a new patch soon with the hadoop_domtrans fix and with the
pid filetrans removed for the rc script domains (and I may also remove
some other things that I do not fully trust).

Just so you know, we also hang out on IRC. That medium is a bit better
for collaboration and interactivity than mailing lists, especially with
the larger projects.

>
>> I will add file context specifications for the locks and pids you have
>> reported.
>>

