Date: Tue, 5 Apr 2016 14:08:22 -0400
From: Chris Mason
To: Peter Zijlstra, Ingo Molnar, Matt Fleming, Mike Galbraith, linux-kernel@vger.kernel.org
Subject: [PATCH RFC] select_idle_sibling experiments
Message-ID: <20160405180822.tjtyyc3qh4leflfj@floor.thefacebook.com>

Hi everyone,

We're porting the fb kernel up to 4.5, and one of our last few out-of-tree
patches is a hack to try harder to find idle cpus when waking up tasks.
This helps in pretty much every workload we run, mostly because they all
get tuned with a similar setup:

1) find the load where latencies stop being acceptable

2) run the server at just a little less than that

Usually this means our CPUs are just a little bit idle, and a poor
scheduler decision to place a task on a busy CPU instead of an idle CPU
ends up impacting our p99 latencies.

Mike helped us with this last year, fixing up wake_wide() to improve
things.  But we still ended up having to go back to the old hack.
I started with a small-ish program to benchmark wakeup latencies.  The
basic idea is a bunch of worker threads who sit around and burn CPU.
Every once in a while they send a message to a message thread.  The
message thread records the time he woke up the worker, and the worker
records the delta between that time and the time he actually got the CPU
again.  At the end it spits out a latency histogram.  The only thing we
record is the wakeup latency; there's no measurement of 'work done' or any
of the normal things you'd expect in a benchmark.

It has knobs for cpu think time, and for how long the messenger thread
waits before replying.  Here's how I'm running it with my patch:

./schbench -c 30000 -s 30000 -m 6 -t 24 -r 30
Latency percentiles (usec)
	50.0000th: 50
	75.0000th: 62
	90.0000th: 73
	95.0000th: 79
	*99.0000th: 99
	99.5000th: 761
	99.9000th: 10160
	Over=0, min=0, max=14659

This translates to cputime of 30ms, sleep time of 30ms, 6 messenger
threads, 24 workers per messenger and a run time of 30 seconds.  My box
has two sockets, 24 cores each.  Mainline varies a bit, but numbers like
this are typical:

./schbench -c 30000 -s 30000 -m 6 -t 24 -r 30
Latency percentiles (usec)
	50.0000th: 50
	75.0000th: 63
	90.0000th: 76
	95.0000th: 85
	*99.0000th: 4680
	99.5000th: 10192
	99.9000th: 10928
	Over=0, min=0, max=21816

A high p99 in real application performance will block a new kernel for us.
p99.5 and p99.9 are included just to show how long the tail really is.

I've inlined schbench.c below and attached it as a .gz file just in case
exchange manages to munge it.

Now, on to the patch.
I pushed some code around and narrowed the problem down to
select_idle_sibling().  We have cores going into and out of idle fast
enough that even this cut our latencies in half:

 static int select_idle_sibling(struct task_struct *p, int target)
 				goto next;
 
 			for_each_cpu(i, sched_group_cpus(sg)) {
-				if (i == target || !idle_cpu(i))
+				if (!idle_cpu(i))
 					goto next;
 			}

IOW, by the time we get down to for_each_cpu(), the idle_cpu() check done
at the top of the function is no longer valid.

I tried a few variations on select_idle_sibling() that preserved the
underlying goal of returning idle cores before idle SMT threads.  They
were all horrible in different ways, and none of them were fast.

The patch below just makes select_idle_sibling() pick the first idle
thread it can find.  When I ran it through production workloads here, it
was faster than the patch we've been carrying around for the last few
years.

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 56b7d4b..c41baa6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4974,7 +4974,6 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 static int select_idle_sibling(struct task_struct *p, int target)
 {
 	struct sched_domain *sd;
-	struct sched_group *sg;
 	int i = task_cpu(p);
 
 	if (idle_cpu(target))
@@ -4990,24 +4989,14 @@ static int select_idle_sibling(struct task_struct *p, int target)
 	 * Otherwise, iterate the domains and find an elegible idle cpu.
 	 */
 	sd = rcu_dereference(per_cpu(sd_llc, target));
-	for_each_lower_domain(sd) {
-		sg = sd->groups;
-		do {
-			if (!cpumask_intersects(sched_group_cpus(sg),
-						tsk_cpus_allowed(p)))
-				goto next;
-
-			for_each_cpu(i, sched_group_cpus(sg)) {
-				if (i == target || !idle_cpu(i))
-					goto next;
-			}
+	if (!sd)
+		goto done;
 
-			target = cpumask_first_and(sched_group_cpus(sg),
-					tsk_cpus_allowed(p));
+	for_each_cpu_and(i, sched_domain_span(sd), &p->cpus_allowed) {
+		if (cpu_active(i) && idle_cpu(i)) {
+			target = i;
 			goto done;
-next:
-			sg = sg->next;
-		} while (sg != sd->groups);
+		}
 	}
 done:
 	return target;

--------------------------------------------

/*
 * schbench.c
 *
 * Copyright (C) 2016 Facebook
 * Chris Mason
 *
 * GPLv2, portions copied from the kernel and from Jens Axboe's fio
 *
 * gcc -Wall -O0 -W schbench.c -o schbench -lpthread
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <getopt.h>
#include <sys/time.h>
#include <time.h>
#include <string.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <pthread.h>

#define PLAT_BITS	8
#define PLAT_VAL	(1 << PLAT_BITS)
#define PLAT_GROUP_NR	19
#define PLAT_NR		(PLAT_GROUP_NR * PLAT_VAL)
#define PLAT_LIST_MAX	20

/* -m number of message threads */
static int message_threads = 2;
/* -t number of workers per message thread */
static int worker_threads = 16;
/* -r seconds */
static int runtime = 30;
/* -s usec */
static int sleeptime = 30000;
/* -c usec */
static unsigned long long cputime = 30000;
/* -a, bool */
static int autobench = 0;

/* the latency histogram uses this to pitch outliers */
static unsigned int max_us = 50000;

/* main() sets this to the time when we should all stop doing work */
static struct timeval global_stop;

/* the message threads flip this to true when they decide runtime is up */
static unsigned long stopping = 0;

/*
 * one stat struct per thread data, when the workers sleep this records the
 * latency between when they are woken up and when they actually get the
 * CPU again.  The message threads sum up the stats of all the workers and
 * then bubble them up to main() for printing
 */
struct stats {
	unsigned int plat[PLAT_NR];
	unsigned int nr_samples;
	unsigned int max;
	unsigned int min;
	unsigned int over;
};

/* this defines which latency profiles get printed */
#define PLIST_P99 4
static double plist[PLAT_LIST_MAX] = { 50.0, 75.0, 90.0, 95.0, 99.0, 99.5, 99.9 };

enum {
	HELP_LONG_OPT = 1,
};

char *option_string = "am:t:s:c:r:";
static struct option long_options[] = {
	{"auto", no_argument, 0, 'a'},
	{"message-threads", required_argument, 0, 'm'},
	{"threads", required_argument, 0, 't'},
	{"runtime", required_argument, 0, 'r'},
	{"sleeptime", required_argument, 0, 's'},
	{"cputime", required_argument, 0, 'c'},
	{"help", no_argument, 0, HELP_LONG_OPT},
	{0, 0, 0, 0}
};

static void print_usage(void)
{
	fprintf(stderr, "schbench usage:\n"
		"\t-m (--message-threads): number of message threads (def: 2)\n"
		"\t-t (--threads): worker threads per message thread (def: 16)\n"
		"\t-r (--runtime): How long to run before exiting (seconds, def: 30)\n"
		"\t-s (--sleeptime): Message thread latency (usec, def: 30000)\n"
		"\t-c (--cputime): How long to think during loop (usec, def: 30000)\n"
	       );
	exit(1);
}

static void parse_options(int ac, char **av)
{
	int c;

	while (1) {
		int option_index = 0;

		c = getopt_long(ac, av, option_string,
				long_options, &option_index);

		if (c == -1)
			break;

		switch(c) {
		case 'a':
			autobench = 1;
			break;
		case 's':
			sleeptime = atoi(optarg);
			break;
		case 'c':
			cputime = atoi(optarg);
			break;
		case 'm':
			message_threads = atoi(optarg);
			break;
		case 't':
			worker_threads = atoi(optarg);
			break;
		case 'r':
			runtime = atoi(optarg);
			break;
		case '?':
		case HELP_LONG_OPT:
			print_usage();
			break;
		default:
			break;
		}
	}

	if (optind < ac) {
		fprintf(stderr, "Error Extra arguments '%s'\n", av[optind]);
		exit(1);
	}
}

void tvsub(struct timeval *tdiff, struct timeval *t1, struct timeval *t0)
{
	tdiff->tv_sec = t1->tv_sec - t0->tv_sec;
	tdiff->tv_usec = t1->tv_usec - t0->tv_usec;
	if (tdiff->tv_usec < 0 && tdiff->tv_sec > 0) {
		tdiff->tv_sec--;
		tdiff->tv_usec += 1000000;
		if (tdiff->tv_usec < 0) {
			fprintf(stderr, "lat_fs: tvsub shows test time ran backwards!\n");
			exit(1);
		}
	}

	/* time shouldn't go backwards!!! */
	if (tdiff->tv_usec < 0 || t1->tv_sec < t0->tv_sec) {
		tdiff->tv_sec = 0;
		tdiff->tv_usec = 0;
	}
}

/*
 * returns the difference between start and stop in usecs.  Negative values
 * are turned into 0
 */
unsigned long long tvdelta(struct timeval *start, struct timeval *stop)
{
	struct timeval td;
	unsigned long long usecs;

	tvsub(&td, stop, start);
	usecs = td.tv_sec;
	usecs *= 1000000;
	usecs += td.tv_usec;
	return (usecs);
}

/* mr axboe's magic latency histogram */
static unsigned int plat_val_to_idx(unsigned int val)
{
	unsigned int msb, error_bits, base, offset;

	/* Find MSB starting from bit 0 */
	if (val == 0)
		msb = 0;
	else
		msb = sizeof(val)*8 - __builtin_clz(val) - 1;

	/*
	 * MSB <= (PLAT_BITS-1), cannot be rounded off. Use
	 * all bits of the sample as index
	 */
	if (msb <= PLAT_BITS)
		return val;

	/* Compute the number of error bits to discard*/
	error_bits = msb - PLAT_BITS;

	/* Compute the number of buckets before the group */
	base = (error_bits + 1) << PLAT_BITS;

	/*
	 * Discard the error bits and apply the mask to find the
	 * index for the buckets in the group
	 */
	offset = (PLAT_VAL - 1) & (val >> error_bits);

	/* Make sure the index does not exceed (array size - 1) */
	return (base + offset) < (PLAT_NR - 1) ?
		(base + offset) : (PLAT_NR - 1);
}

/*
 * Convert the given index of the bucket array to the value
 * represented by the bucket
 */
static unsigned int plat_idx_to_val(unsigned int idx)
{
	unsigned int error_bits, k, base;

	if (idx >= PLAT_NR) {
		fprintf(stderr, "idx %u is too large\n", idx);
		exit(1);
	}

	/*
	 * MSB <= (PLAT_BITS-1), cannot be rounded off. Use
	 * all bits of the sample as index
	 */
	if (idx < (PLAT_VAL << 1))
		return idx;

	/* Find the group and compute the minimum value of that group */
	error_bits = (idx >> PLAT_BITS) - 1;
	base = 1 << (error_bits + PLAT_BITS);

	/* Find its bucket number of the group */
	k = idx % PLAT_VAL;

	/* Return the mean of the range of the bucket */
	return base + ((k + 0.5) * (1 << error_bits));
}

static unsigned int calc_percentiles(unsigned int *io_u_plat, unsigned long nr,
				     unsigned int **output)
{
	unsigned long sum = 0;
	unsigned int len, i, j = 0;
	unsigned int oval_len = 0;
	unsigned int *ovals = NULL;
	int is_last;

	len = 0;
	while (len < PLAT_LIST_MAX && plist[len] != 0.0)
		len++;

	if (!len)
		return 0;

	/*
	 * Calculate bucket values, note down max and min values
	 */
	is_last = 0;
	for (i = 0; i < PLAT_NR && !is_last; i++) {
		sum += io_u_plat[i];
		while (sum >= (plist[j] / 100.0 * nr)) {
			if (j == oval_len) {
				oval_len += 100;
				ovals = realloc(ovals, oval_len * sizeof(unsigned int));
			}

			ovals[j] = plat_idx_to_val(i);
			is_last = (j == len - 1);
			if (is_last)
				break;

			j++;
		}
	}

	*output = ovals;
	return len;
}

static int calc_p99(struct stats *s)
{
	unsigned int *ovals = NULL;
	int ret = 0;
	int len;

	len = calc_percentiles(s->plat, s->nr_samples, &ovals);
	if (len && len > PLIST_P99)
		ret = ovals[PLIST_P99];
	if (ovals)
		free(ovals);
	return ret;
}

static void show_latencies(struct stats *s)
{
	unsigned int *ovals = NULL;
	unsigned int len, i;

	len = calc_percentiles(s->plat, s->nr_samples, &ovals);
	if (len) {
		fprintf(stderr, "Latency percentiles (usec)\n");
		for (i = 0; i < len; i++)
			fprintf(stderr, "\t%s%2.4fth: %u\n",
				i == PLIST_P99 ? "*" : "",
				plist[i], ovals[i]);
	}

	if (ovals)
		free(ovals);

	fprintf(stderr, "\tOver=%u, min=%u, max=%u\n", s->over, s->min, s->max);
}

/* fold latency info from s into d */
void combine_stats(struct stats *d, struct stats *s)
{
	int i;
	for (i = 0; i < PLAT_NR; i++)
		d->plat[i] += s->plat[i];
	d->nr_samples += s->nr_samples;
	d->over += s->over;
	if (s->max > d->max)
		d->max = s->max;
	if (s->min < d->min)
		d->min = s->min;
}

/* record a latency result into the histogram */
static void add_lat(struct stats *s, unsigned int us)
{
	int lat_index = 0;

	if (us > s->max)
		s->max = us;
	if (us < s->min)
		s->min = us;

	if (us > max_us) {
		fprintf(stderr, "latency=%u usec\n", us);
		s->over++;
	}

	lat_index = plat_val_to_idx(us);
	__sync_fetch_and_add(&s->plat[lat_index], 1);
	__sync_fetch_and_add(&s->nr_samples, 1);
}

/*
 * every thread has one of these, it comes out to about 19K thanks to the
 * giant stats struct
 */
struct thread_data {
	pthread_t tid;
	/* ->next is for placing us on the msg_thread's list for waking */
	struct thread_data *next;

	/* our parent thread and messaging partner */
	struct thread_data *msg_thread;

	/*
	 * the msg thread stuffs gtod in here before waking us, so we can
	 * measure scheduler latency
	 */
	struct timeval wake_time;

	/* keep the futex and the wake_time in the same cacheline */
	int futex;

	/* mr axboe's magic latency histogram */
	struct stats stats;
};

/* we're so fancy we make our own futex wrappers */
#define FUTEX_BLOCKED 0
#define FUTEX_RUNNING 1

static int futex(int *uaddr, int futex_op, int val,
		 const struct timespec *timeout, int *uaddr2, int val3)
{
	return syscall(SYS_futex, uaddr, futex_op, val, timeout, uaddr2, val3);
}

/*
 * wakeup a process waiting on a futex, making sure they are really waiting
 * first
 */
static void fpost(int *futexp)
{
	int s;

	if (__sync_bool_compare_and_swap(futexp, FUTEX_BLOCKED,
					 FUTEX_RUNNING)) {
		s = futex(futexp, FUTEX_WAKE, 1, NULL, NULL, 0);
		if (s == -1) {
			perror("FUTEX_WAKE");
			exit(1);
		}
	}
}

/*
 * wait on a futex, with an optional timeout.  Make sure to set
 * the futex to FUTEX_BLOCKED beforehand.
 *
 * This will return zero if all went well, or return -ETIMEDOUT if you
 * hit the timeout without getting posted
 */
static int fwait(int *futexp, struct timespec *timeout)
{
	int s;

	while (1) {
		/* Is the futex available? */
		if (__sync_bool_compare_and_swap(futexp, FUTEX_RUNNING,
						 FUTEX_BLOCKED)) {
			break;	/* Yes */
		}
		/* Futex is not available; wait */
		s = futex(futexp, FUTEX_WAIT, FUTEX_BLOCKED, timeout, NULL, 0);
		if (s == -1 && errno != EAGAIN) {
			if (errno == ETIMEDOUT)
				return -ETIMEDOUT;
			perror("futex-FUTEX_WAIT");
			exit(1);
		}
	}
	return 0;
}

/*
 * cmpxchg based list prepend
 */
static void xlist_add(struct thread_data *head, struct thread_data *add)
{
	struct thread_data *old;
	struct thread_data *ret;

	while (1) {
		old = head->next;
		add->next = old;
		ret = __sync_val_compare_and_swap(&head->next, old, add);
		if (ret == old)
			break;
	}
}

/*
 * xchg based list splicing.  This returns the entire list and
 * replaces the head->next with NULL
 */
static struct thread_data *xlist_splice(struct thread_data *head)
{
	struct thread_data *old;
	struct thread_data *ret;

	while (1) {
		old = head->next;
		ret = __sync_val_compare_and_swap(&head->next, old, NULL);
		if (ret == old)
			break;
	}
	return ret;
}

/*
 * Wake everyone currently waiting on the message list, filling in their
 * thread_data->wake_time with the current time.
 *
 * It's not exactly the current time, it's really the time at the start of
 * the list run.  We want to detect when the scheduler is just preempting the
 * waker and giving away the rest of its timeslice.  So we gtod once at
 * the start of the loop and use that for all the threads we wake.
 */
static void xlist_wake_all(struct thread_data *td)
{
	struct thread_data *list;
	struct thread_data *next;
	struct timeval now;

	list = xlist_splice(td);
	gettimeofday(&now, NULL);
	while (list) {
		next = list->next;
		list->next = NULL;
		memcpy(&list->wake_time, &now, sizeof(now));
		fpost(&list->futex);
		list = next;
	}
}

/*
 * called by worker threads to send a message and wait for the answer.
 * In reality we're just trading one cacheline with the gtod and futex in
 * it, but that's good enough.  We gtod after waking and use that to
 * record scheduler latency.
 */
static void msg_and_wait(struct thread_data *td)
{
	struct timeval now;
	unsigned long long delta;
	struct timespec timeout;

	timeout.tv_sec = 0;
	timeout.tv_nsec = 5000 * 1000;

	/* set ourselves to blocked */
	td->futex = FUTEX_BLOCKED;
	gettimeofday(&td->wake_time, NULL);

	/* add us to the list */
	xlist_add(td->msg_thread, td);

	fpost(&td->msg_thread->futex);

	/*
	 * don't wait if the main threads are shutting down,
	 * they will never kick us.  fpost has a full barrier, so as long
	 * as the message thread walks his list after setting stopping,
	 * we shouldn't miss the wakeup
	 */
	if (!stopping) {
		/* if he hasn't already woken us up, wait */
		fwait(&td->futex, NULL);
	}

	gettimeofday(&now, NULL);
	delta = tvdelta(&td->wake_time, &now);
	if (delta > 0)
		add_lat(&td->stats, delta);
}

/*
 * once the message thread starts all his children, this is where he
 * loops until our runtime is up.  Basically this sits around waiting
 * for posting by the worker threads, replying to their messages after
 * a delay of 'sleeptime' + some jitter.
 */
static void run_msg_thread(struct thread_data *td)
{
	struct timeval now;
	struct timespec timeout;
	unsigned int seed = pthread_self();
	int max_jitter = sleeptime / 4;
	int jitter;

	jitter = rand_r(&seed) % max_jitter;
	timeout.tv_sec = 0;
	timeout.tv_nsec = (sleeptime + jitter) * 1000;

	while (1) {
		td->futex = FUTEX_BLOCKED;
		xlist_wake_all(td);

		gettimeofday(&now, NULL);
		if (now.tv_sec > global_stop.tv_sec) {
			stopping = 1;
			__sync_synchronize();
			xlist_wake_all(td);
			break;
		}
		fwait(&td->futex, &timeout);

		/*
		 * messages shouldn't be instant, sleep a little to make them
		 * wait
		 */
		jitter = rand_r(&seed) % max_jitter;
		usleep(sleeptime + jitter);
	}
}

#define nop __asm__ __volatile__("rep;nop": : :"memory")

static void usec_spin(unsigned long spin_time)
{
	struct timeval now;
	struct timeval start;
	unsigned long long delta;

	gettimeofday(&start, NULL);
	while (1) {
		gettimeofday(&now, NULL);
		delta = tvdelta(&start, &now);
		if (delta > spin_time)
			return;
		nop;
	}
}

/*
 * the worker thread is pretty simple, it just does a single spin and
 * then waits on a message from the message thread
 */
void *worker_thread(void *arg)
{
	struct thread_data *td = arg;

	while(1) {
		if (stopping)
			break;

		usec_spin(cputime);
		msg_and_wait(td);
	}
	return NULL;
}

/*
 * the message thread starts his own gaggle of workers and then sits around
 * replying when they post him.  He collects latency stats as all the threads
 * exit
 */
void *message_thread(void *arg)
{
	struct thread_data *td = arg;
	struct thread_data *worker_threads_mem = NULL;
	int i;
	int ret;

	worker_threads_mem = calloc(worker_threads, sizeof(struct thread_data));
	if (!worker_threads_mem) {
		perror("unable to allocate ram");
		pthread_exit((void *)-ENOMEM);
	}

	for (i = 0; i < worker_threads; i++) {
		pthread_t tid;

		worker_threads_mem[i].msg_thread = td;
		ret = pthread_create(&tid, NULL, worker_thread,
				     worker_threads_mem + i);
		if (ret) {
			fprintf(stderr, "error %d from pthread_create\n", ret);
			exit(1);
		}
		worker_threads_mem[i].tid = tid;
	}

	run_msg_thread(td);

	for (i = 0; i < worker_threads; i++) {
		pthread_join(worker_threads_mem[i].tid, NULL);
		combine_stats(&td->stats, &worker_threads_mem[i].stats);
	}
	free(worker_threads_mem);

	return NULL;
}

int main(int ac, char **av)
{
	int i;
	int ret;
	struct thread_data *message_threads_mem = NULL;
	struct stats stats;

	parse_options(ac, av);
again:
	stopping = 0;
	memset(&stats, 0, sizeof(stats));

	message_threads_mem = calloc(message_threads,
				     sizeof(struct thread_data));

	if (!message_threads_mem) {
		perror("unable to allocate ram");
		exit(1);
	}
	gettimeofday(&global_stop, NULL);
	global_stop.tv_sec += runtime;

	/* start our message threads, each one starts its own workers */
	for (i = 0; i < message_threads; i++) {
		pthread_t tid;

		ret = pthread_create(&tid, NULL, message_thread,
				     message_threads_mem + i);
		if (ret) {
			fprintf(stderr, "error %d from pthread_create\n", ret);
			exit(1);
		}
		message_threads_mem[i].tid = tid;
	}
	for (i = 0; i < message_threads; i++) {
		pthread_join(message_threads_mem[i].tid, NULL);
		combine_stats(&stats, &message_threads_mem[i].stats);
	}

	free(message_threads_mem);

	/*
	 * in auto bench mode, keep adding workers until our latencies get
	 * horrible
	 */
	if (autobench) {
		int p99 = calc_p99(&stats);
		fprintf(stderr, "cputime %llu threads %d p99 %d\n",
			cputime, worker_threads, p99);
		if (p99 < 2000) {
			worker_threads++;
			goto again;
		}
	}
	show_latencies(&stats);

	return 0;
}