Hello,
Apologies in advance if this is a newbie question. I am attempting to
write a real-time simulation of an application we have in house. I
have a dual processor SMP system, hyperthreading enabled, running
kernel 2.6.7.
The first thread begins at priority 1 (SCHED_RR) and subsequently
spawns another time-critical task running at priority 2. The initial
thread uses sched_setaffinity to pin the process to CPU 2. When the
second task begins, the migration thread becomes 30% active (as
reported by top) for the duration of its execution. When the priority 2
thread terminates, the first thread continues with the migration thread
consuming only 2% of the CPU.
If anything, I was expecting the higher priority of the second thread
to let it run at closer to 100% CPU. I built a test program in which
each thread computes an identical dumb timing loop. The priority 2
thread ends up executing 30% slower than the priority 1 thread because
of contention with the migration thread.
Is this the expected behavior, and if so, could you please explain why?
I had not anticipated any attempt by the kernel to shift the process to
another CPU, since sched_setaffinity had been applied.
I should note that, according to xosview, both tasks do execute
serially on processor 2 as expected. Code is included below.
don
#define _GNU_SOURCE
#include <stdio.h>
#include <math.h>
#include <sched.h>
#include <pthread.h>

/* Both threads run the same dumb timing loop. */
static void *local_thread(void *arg)
{
	long i, j = 0;

	fprintf(stderr, "thread %ld\n", j);
	for (i = 0; i < 1000000000; i++)
		j += sqrt((double)i);
	fprintf(stderr, "thread %ld\n", j);
	return NULL;
}

int main(int argc, char *argv[])
{
	cpu_set_t cur_mask;
	struct sched_param policy;
	size_t stack_size;
	pthread_attr_t attr;
	pthread_t task_pid;
	long i, j = 0;
	int ret;

	/* Pin this process (and any thread it creates) to CPU 2. */
	CPU_ZERO(&cur_mask);
	CPU_SET(2, &cur_mask);
	ret = sched_setaffinity(0, sizeof(cur_mask), &cur_mask);

	/* Make the initial thread SCHED_RR, priority 1. */
	policy.sched_priority = 1;
	ret = sched_setscheduler(0, SCHED_RR, &policy);

	/* Spawn the second thread as SCHED_RR, priority 2.
	 * PTHREAD_EXPLICIT_SCHED is needed, otherwise the attribute's
	 * policy/priority are ignored and the child inherits ours. */
	ret = pthread_attr_init(&attr);
	ret = pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
	ret = pthread_attr_setschedpolicy(&attr, SCHED_RR);
	policy.sched_priority = 2;
	ret = pthread_attr_setschedparam(&attr, &policy);
	ret = pthread_attr_getstacksize(&attr, &stack_size);
	ret = pthread_create(&task_pid, &attr, local_thread, NULL);

	/* Identical timing loop in the initial (priority 1) thread. */
	fprintf(stderr, "enter %ld\n", j);
	for (i = 0; i < 1000000000; i++)
		j += sqrt((double)i);
	fprintf(stderr, "end %ld\n", j);

	pthread_join(task_pid, NULL);

	/* Drop back to SCHED_OTHER before exiting. */
	policy.sched_priority = 0;
	ret = sched_setscheduler(0, SCHED_OTHER, &policy);
	return 0;
}
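(Aside: the same thing can also be checked from inside the program
rather than via xosview. Below is a quick sketch using
sched_getaffinity(), sched_getscheduler() and sched_getparam(); the
report_sched_state() helper is just a name made up for illustration.)

#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>

/* Illustration only: print the calling thread's policy, priority and
 * allowed CPUs.  Error checking omitted for brevity. */
static void report_sched_state(const char *tag)
{
	cpu_set_t mask;
	struct sched_param sp;
	int cpu, policy;

	sched_getaffinity(0, sizeof(mask), &mask);
	policy = sched_getscheduler(0);
	sched_getparam(0, &sp);

	fprintf(stderr, "%s: policy=%d prio=%d cpus:", tag,
		policy, sp.sched_priority);
	for (cpu = 0; cpu < CPU_SETSIZE; cpu++)
		if (CPU_ISSET(cpu, &mask))
			fprintf(stderr, " %d", cpu);
	fprintf(stderr, "\n");
}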
don fisher writes:
> The initial thread uses sched_setaffinity to pin the process to CPU 2.
> When the second task begins, the migration thread becomes 30% active
> (as reported by top) for the duration of its execution.
> [...]
> Is this the expected behavior, and if so, could you please explain why?
> I had not anticipated any attempt by the kernel to shift the process to
> another CPU, since sched_setaffinity had been applied.
Yes, this is a problem with the SMP balancing code. It will keep trying
to balance the now-unbalanced CPUs (two tasks on one CPU, the other CPU
idle) even though the affinity mask restricts both of those tasks to
that one CPU. The balancing code is not smart enough (yet?) to recognise
that there is no point trying to balance processes that you don't want
balanced.
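Very roughly, and only as a toy userspace illustration (the names and
structure below are made up; this is not the actual kernel code), a
balance pass behaves like this: the imbalance is computed from the
run-queue lengths alone, and when the affinity check rejects every
candidate the pass is still recorded as a failed balance, so it simply
gets retried, which is presumably why the migration thread shows up in
top.

#include <stdio.h>

/* Toy model only; made-up names, not the real scheduler code. */
struct task {
	const char *name;
	unsigned int cpus_allowed;	/* bit n set => may run on CPU n */
};

/* Affinity check: may this task be moved to the destination CPU? */
static int can_migrate(const struct task *t, int dest_cpu)
{
	return (t->cpus_allowed >> dest_cpu) & 1;
}

int main(void)
{
	/* Two RT tasks, both pinned to CPU 2, while CPU 3 sits idle. */
	struct task busy_queue[] = {
		{ "prio 1 loop", 1u << 2 },
		{ "prio 2 loop", 1u << 2 },
	};
	int dest_cpu = 3, failed_balances = 0, pass, i;

	/* Each periodic pass sees 2 runnable tasks vs 0 and declares an
	 * imbalance.  Nothing can legally be moved, but the pass is still
	 * counted as a failed balance, so the next tick tries again. */
	for (pass = 0; pass < 10; pass++) {
		int moved = 0;

		for (i = 0; i < 2; i++)
			if (can_migrate(&busy_queue[i], dest_cpu))
				moved++;
		if (moved == 0)
			failed_balances++;
	}
	printf("failed balance passes: %d of 10\n", failed_balances);
	return 0;
}

Running it, all ten passes come back as failed even though nothing
could ever have been moved.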
Con
don fisher wrote:
> When the second task begins, the migration thread becomes 30% active
> (as reported by top) for the duration of its execution.
> [...]
> I had not anticipated any attempt by the kernel to shift the process to
> another CPU, since sched_setaffinity had been applied.
OK, I'll have to put something in that doesn't class the balancing
attempt as a failure when it only encounters tasks that aren't allowed
to be moved.
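In terms of the toy model sketched earlier in the thread, the change
would be roughly the following (it reuses that model's struct task and
can_migrate(); this is only the shape of the idea, not the actual patch
against kernel/sched.c):

/* Decide whether a balance pass that moved nothing should count as a
 * failure worth retrying.  A queue where every task is pinned away from
 * the destination CPU is not a failure; it should just be left alone. */
static int pass_was_failure(const struct task *queue, int nr,
			    int dest_cpu, int nr_moved)
{
	int all_pinned = 1, i;

	for (i = 0; i < nr; i++)
		if (can_migrate(&queue[i], dest_cpu))
			all_pinned = 0;	/* something could legally move */

	/* Only a real failure if something movable existed (it may have
	 * been skipped for other reasons, e.g. it was running) and still
	 * nothing was moved. */
	return nr_moved == 0 && !all_pinned;
}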