How is the microsecond resolution/granularity of gettimeofday() accomplished? Linux runs on many different hardware platforms, so the specifics differ. On a modern x86 platform Linux uses the Time Stamp Counter, also known as the TSC, which is driven by a multiple of a crystal oscillator running at 133.33 MHz. The crystal oscillator provides a reference clock to the processor, and the processor multiplies it by some factor - for example, on a 2.93 GHz processor the multiplier is 22. The TSC historically was an unreliable source of time because implementations would stop the counter when the processor went to sleep, or because the multiplier wasn't constant as the processor shifted between performance states or throttled down when it got hot. Modern x86 processors provide a TSC that is constant, invariant, and non-stop. On these processors the TSC is an excellent high-resolution clock, and the Linux kernel determines an initial approximate frequency for it at boot time. The TSC provides microsecond resolution for the gettimeofday() system call and nanosecond resolution for the clock_gettime() system call.
Measuring Time in Linux Kernel Space With Sub-Microsecond Precision
It's possible to shrink it down to ~600 KiB. Check the work done by Tom Zanussi from Intel: the presentation from Tom and the wiki page about the topic. UPDATE: Tom also published interesting statistics about memory use by the different subsystems in the kernel, gathered while he was working on the project.
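As a rough sketch of the same idea, mainline already ships a starting point for very small kernels: the `tinyconfig` make target (added in Linux 3.17) builds the smallest configuration the build system can generate, which you then grow by enabling only the drivers you need. The commands below assume you are in a kernel source tree:

```shell
# Start from the smallest possible configuration (allnoconfig base
# plus the size-oriented options such as CONFIG_CC_OPTIMIZE_FOR_SIZE).
make tinyconfig

# Selectively re-enable only what your board actually needs.
make menuconfig

# Build and inspect the resulting image size (x86 path shown).
make -j"$(nproc)"
ls -lh arch/x86/boot/bzImage
```

The final size depends heavily on the architecture and on which subsystems you re-enable, so treat the ~600 KiB figure as what is achievable with aggressive trimming, not as the default result.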
Embedded Linux Kernel and Desktop Linux Kernel Difference
Yes, there is one official kernel for all the different architectures at kernel.org. There may be forks with special hardware handling, additional drivers, etc. for specific customers (hardware suppliers such as Samsung).