Here's the way I sometimes test RAM: first mount two tmpfs filesystems (by default a tmpfs is sized at half the RAM):
# mount -t tmpfs /mnt/test1 /mnt/test1
# mount -t tmpfs /mnt/test2 /mnt/test2
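If you prefer an explicit cap rather than the default of half the RAM, tmpfs also accepts a size mount option; the 64M below is just an illustration:
# mount -t tmpfs -o size=64M tmpfs /mnt/test1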
Check free memory and free space:
# free
             total       used       free     shared    buffers     cached
Mem:        252076     234760      17316          0      75856      62328
-/+ buffers/cache:      96576     155500
Swap:      1048820        332    1048488
# df -h -t tmpfs
Filesystem            Size  Used Avail Use% Mounted on
tmpfs                 124M     0  124M   0% /lib/init/rw
udev                   10M  104K  9.9M   2% /dev
tmpfs                 124M     0  124M   0% /dev/shm
/mnt/test1            124M     0  124M   0% /mnt/test1
/mnt/test2            124M     0  124M   0% /mnt/test2
Now fill the tmpfs with dd:
# dd if=/dev/zero of=/mnt/test1/test bs=1M
dd: writing to `/mnt/test1/test': No space left on device
123+0 records in
122+0 records out
128802816 bytes (129 MB) copied, 1.81943 s, 70.8 MB/s
# dd if=/dev/zero of=/mnt/test2/test bs=1M
dd: writing to `/mnt/test2/test': No space left on device
123+0 records in
122+0 records out
128802816 bytes (129 MB) copied, 5.78563 s, 22.3 MB/s
You can check that your memory is actually quite full:
# free
             total       used       free     shared    buffers     cached
Mem:        252076     248824       3252          0       1156     226380
-/+ buffers/cache:      21288     230788
Swap:      1048820      50020     998800
Now you may run various tests, for instance checking that both temporary files are identical, either directly or by running md5sum, sha1sum, etc. (see the checksum example below):
# time cmp /mnt/test1/test /mnt/test2/test
real    0m4.328s
user    0m0.041s
sys     0m1.117s
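For example, a checksum comparison of the two files (a sketch using the paths from above; both files should report the same sum if the RAM is fine):
# md5sum /mnt/test1/test /mnt/test2/test
# sha1sum /mnt/test1/test /mnt/test2/test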
As for temperature monitoring, the only tool I know of is lm-sensors. I don't know whether it supports your particular hardware, but you could probably give it a try anyway.
From the cpusets documentation:
Calls to sched_setaffinity are filtered to just those CPUs allowed
in that task's cpuset.
This implies that CPU affinity masks are intersected with the CPUs in the cpuset (cgroup) that the process is a member of.
E.g., if the affinity mask of a process includes cores {0, 1, 3} and the process is in the system cgroup, which is restricted to cores {1, 2}, then the process is forced to run on core 1, the intersection of the two sets.
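One way to watch the filtering happen, sketched assuming a cgroup v1 cpuset mounted at /sys/fs/cgroup/cpuset and a cpuset named system as in the example above (the PID 1234 is a placeholder):
# cat /sys/fs/cgroup/cpuset/system/cpuset.cpus   # CPUs the cpuset allows, e.g. 1-2
# taskset -cp 0,1,3 1234                         # request cores 0, 1 and 3 for PID 1234
# taskset -cp 1234                               # the affinity read back is the intersection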
I'm 99% certain that the htop output is "wrong" due to the fact that the processes have not woken up since the cgroups were created, and the display is showing the last core each process ran on.
If I start vim before making my shield, vim forks twice (for some reason), and the deepest child is running on core 2. If I then make the shield, suspend vim (Ctrl+Z) and resume it, both processes have moved to core 0. I think this confirms the hypothesis that htop is showing stale information.
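Independently of htop, ps can report the CPU a process last ran on via the PSR column (replace <pid> with the process in question):
# ps -o pid,psr,comm -p <pid>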
You can also inspect /proc/<pid>/status and look at the Cpus_allowed_* fields. E.g. I have a console-kit-daemon process (pid 857) here showing in htop as running on core 3, but in /proc/857/status:
Cpus_allowed: 1
Cpus_allowed_list: 0
I think this is saying that the affinity mask is 0x1, which allows running only on core 0 due to the cgroups: i.e. intersect({0,1,2,3}, {0}) = {0}.
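To pull just those fields out in one go (857 being the PID from the example above):
# grep Cpus_allowed /proc/857/status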
If I can, I'll leave the question open a while to see if any better answer comes up.
Thanks to @davmac for helping with this (on IRC).
Best Answer
Yes, it's possible.
taskset is an operating-system-level function: it doesn't matter what CPU architecture you're using (within reason), you're just telling the kernel where it can and can't run things. The differences between ARM and x86 are largely immaterial here, so I won't get into them in any level of detail (though you should be aware of them and their implications for your particular use case -- that's a research project for you to do in your spare time).
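For illustration, typical taskset invocations look like this; the program name and PID are placeholders:
# taskset -c 0 ./myprogram    # launch ./myprogram pinned to core 0
# taskset -cp 0,2 1234        # restrict running PID 1234 to cores 0 and 2
# taskset -cp 1234            # show the current affinity of PID 1234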
No, you should not do this.
Do not try to outsmart the scheduler.
The scheduler is smarter than you think it is.
In most cases, the scheduler is smarter than you are.
The scheduler was designed by people who fully understand the operating system's internals. These people have lots of real-world experience optimizing systems, and they've poured that experience (along with no small amount of blood, sweat, tears, and profanity) into your operating system's scheduler algorithm.
As David said, using taskset or other tools to manually force affinity (or any other scheduler settings) may make sense if and only if you know for a fact that your situation is "special" and the scheduler will do the Wrong Thing if left to its own devices. These are surgical tools designed for very specific, very narrow sets of circumstances.
Using the tools incorrectly will wind up killing people (er, hurting performance).