Timer resolution is important if you have precise timing requirements, which is typical for real-time applications. The default timer in Linux is a clock tick that runs at a configurable rate, typically 100 Hz for embedded systems and 250 Hz for servers and desktops. The interval between two timer ticks is known as a jiffy and, in the examples just given, is 10 milliseconds on an embedded SoC and 4 milliseconds on a server.
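To make the arithmetic concrete, here is a small sketch of my own (not kernel code) that prints the jiffy length for the usual CONFIG_HZ choices; the jiffy is simply the reciprocal of the tick rate:

#include <stdio.h>

int main(void)
{
    /* Common CONFIG_HZ choices; the real value is fixed when the
       kernel is configured */
    int rates[] = { 100, 250, 300, 1000 };

    for (int i = 0; i < 4; i++)
        printf("HZ = %4d -> jiffy = %4.1f ms\n",
               rates[i], 1000.0 / rates[i]);
    return 0;
}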
Linux gained more accurate timers from the real-time kernel project in version 2.6.18, and they are now available on all platforms, provided that there is a high-resolution timer source and a device driver for it, which is almost always the case. You need to configure the kernel with CONFIG_HIGH_RES_TIMERS=y.
With this enabled, all of the kernel and user space clocks will be accurate down to the granularity of the underlying hardware. Finding the actual clock granularity is difficult. The obvious answer is the value provided by clock_getres(2), but that always claims a resolution of one nanosecond.
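As an illustration, here is a minimal sketch (my own, using CLOCK_MONOTONIC, though the other clocks behave the same way) that queries the resolution; on a kernel with high-resolution timers it will almost certainly report 1 ns, whatever the hardware can really deliver:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res;

    /* Ask the kernel for the claimed resolution of the monotonic
       clock */
    if (clock_getres(CLOCK_MONOTONIC, &res) == -1) {
        perror("clock_getres");
        return 1;
    }
    printf("claimed resolution: %ld ns\n",
           res.tv_sec * 1000000000L + res.tv_nsec);
    return 0;
}

On older versions of glibc you may need to link with -lrt.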
The cyclictest tool that I will describe later has an option to analyze the times reported by the clock and guess the resolution:
# cyclictest -R
# /dev/cpu_dma_latency set to 0us
WARN: reported clock resolution: 1 nsec
WARN: measured clock resolution approximately: 708 nsec

You can also look at the kernel log messages for strings like this:

# dmesg | grep clock
OMAP clockevent source: timer2 at 24000000 Hz
sched_clock: 32 bits at 24MHz, resolution 41ns, wraps every 178956969942ns
OMAP clocksource: timer1 at 24000000 Hz
Switched to clocksource timer1
The two methods give rather different numbers, for which I have no good explanation, but since both are below one microsecond, I am happy.
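If you do not have cyclictest to hand, you can make a similar estimate yourself. The following is my own sketch of the idea, not code taken from cyclictest: read the clock in a tight loop and record the smallest non-zero step between successive readings.

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t1, t2;
    long min_delta = -1;

    for (int i = 0; i < 1000; i++) {
        clock_gettime(CLOCK_MONOTONIC, &t1);
        /* Spin until the reported time changes */
        do {
            clock_gettime(CLOCK_MONOTONIC, &t2);
        } while (t1.tv_sec == t2.tv_sec && t1.tv_nsec == t2.tv_nsec);

        long delta = (t2.tv_sec - t1.tv_sec) * 1000000000L +
                     (t2.tv_nsec - t1.tv_nsec);
        if (min_delta < 0 || delta < min_delta)
            min_delta = delta;
    }
    /* The result includes the cost of clock_gettime() itself, so
       it is an upper bound on the true clock granularity */
    printf("measured clock resolution approximately: %ld nsec\n",
           min_delta);
    return 0;
}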