Kernel configuration

Typical configuration issues

x86 processor type

Pick the exact processor model of your x86 platform, or at the very least the most specific model close to the target CPU, never a generic placeholder. For instance, selecting a generic i586 tells the kernel that no TSC feature is available from the CPU, which in turn requires Xenomai to emulate a time stamp counter, leading to suboptimal performance.

Xenomai 3 requires the hardware TSC feature from x86 CPUs.
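A quick way to check this requirement on a running x86 Linux system is to look at the CPU feature flags exported by the kernel (standard procfs path; flag names are those printed by the kernel):

```shell
# Check /proc/cpuinfo for the TSC feature flags (x86 Linux).
if grep -qw tsc /proc/cpuinfo; then
    echo "TSC present"
    grep -qw constant_tsc /proc/cpuinfo \
        && echo "TSC frequency is invariant" \
        || echo "TSC frequency may vary with the CPU clock"
else
    echo "no TSC flag: Xenomai 3 cannot run on this CPU"
fi
```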

If you have to run Xenomai 2 on a low-end x86 platform lacking a hardware TSC, with a legacy kernel predating 3.2, make sure to disable CONFIG_INPUT_PCSPKR. With any later kernel version, the appropriate settings are applied automatically.

Power management

You should not disable power management globally; the only configuration options which should be disabled in this area are:

  • CONFIG_CPU_FREQ

  • CONFIG_CPU_IDLE

  • CONFIG_APM

  • CONFIG_ACPI_PROCESSOR

  • CONFIG_INTEL_IDLE

In particular, keeping ACPI enabled bears no risk of high latencies, whereas disabling it may prevent your system from booting correctly.


In the same vein, do not disable USB; instead, enable support for your USB host controller(s): when starting, the host controller driver disables “legacy USB emulation”, which is a source of high latencies.

CPU Frequency scaling

Disable CONFIG_CPU_FREQ: this option creates issues with the Xenomai timing code, causes unpredictable run times for your real-time threads, and possibly high latencies when the CPU frequency is changed dynamically.

Timing should remain correct on recent processors featuring the constant_tsc flag, though.

However, the ondemand governor cannot track Xenomai threads running in primary mode, i.e. under the control of the co-kernel. Such threads look suspended from its point of view, although they may be consuming CPU time. Therefore, the governor won’t switch the CPU to a higher frequency even though the real-time activity would justify it.
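When CONFIG_CPU_FREQ must stay enabled for some reason, a commonly used mitigation is to lock the governor to performance so the frequency never changes at run time. The snippet below merely inspects the current setting, assuming the standard cpufreq sysfs layout:

```shell
# Show the active scaling governor for each CPU, if cpufreq is enabled.
# To pin the frequency (as root):
#   echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor 2>/dev/null \
    || echo "cpufreq not enabled in this kernel"
```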

Configuration Options

CONFIG_CPU_FREQ - Disable
This allows the CPU frequency to be modulated with workload, but many CPUs also change the TSC counting frequency, which makes the TSC useless for accurate timing when the CPU clock can change. Some CPUs can also take several milliseconds to ramp up to full speed.

CONFIG_CPU_IDLE - Disable
Allows the CPU to enter deep sleep states, increasing the time it takes to leave these sleep states, hence the latency of an idle system. Also, on some CPUs, entering these deep sleep states causes the timers used by Xenomai to stop functioning.

This option may be enabled on x86_64 only with Xenomai 2, and on both x86_64 and x86_32 with Xenomai 3.

CONFIG_APM - Disable
The APM model assigns power management control to the BIOS, and BIOS code is never written with best latency in mind. If configured, APM routines are invoked with SMI priority, which bypasses the interrupt pipeline entirely. Resorting to the Xenomai workaround for SMI won’t help here.

CONFIG_ACPI_PROCESSOR - Disable
For systems with ACPI support in the BIOS, this ACPI sub-option installs an idle handler that uses the ACPI C2 and C3 processor states to save power. The CPU must wake up from these sleep states, increasing latency in ways dependent upon both the BIOS’s ACPI tables and code. You may be able to suppress the sleeping with the idle=poll boot argument; test to find out. With recent versions of Linux (probably starting around Linux 2.6.21), the ACPI processor module disables the local APIC when loaded, which causes Xenomai timer initialization to fail. This is a second reason for disabling this option.

CONFIG_INTEL_IDLE - Disable
Just like CONFIG_ACPI_PROCESSOR, this idle driver sends the CPU into deep C-states, also causing huge latencies, because the APIC timer that Xenomai uses may not fire anymore.
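In a kernel .config file, the power management settings called out in this section read as follows (option names as they appear in current x86 Kconfig; adapt to your kernel version):

```
# CONFIG_CPU_FREQ is not set
# CONFIG_CPU_IDLE is not set
# CONFIG_APM is not set
# CONFIG_ACPI_PROCESSOR is not set
# CONFIG_INTEL_IDLE is not set
```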


CONFIG_PCI_MSI
This option may be enabled, provided the following operations on interrupt lines are always performed from a plain Linux context, aka secondary mode, and never from real-time mode:

  • hooking/requesting

  • releasing

  • enabling/unmasking

  • disabling/masking

Practically, the requirement above translates into calling rtdm_irq_request(), rtdm_irq_free(), rtdm_irq_enable(), rtdm_irq_disable() exclusively from a non-rt handler in any RTDM driver. This includes the →open(), →close() and →ioctl_nrt() handlers.
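As a minimal sketch against the Xenomai 3 RTDM API, the pattern looks like this (kernel-space code; the device name, IRQ number and handler names are hypothetical):

```c
#include <rtdm/driver.h>

#define DEMO_IRQ 42	/* hypothetical interrupt line */

static rtdm_irq_t demo_irq_handle;

/* Real-time interrupt handler, runs in primary mode. */
static int demo_isr(rtdm_irq_t *irq_handle)
{
	/* acknowledge the device interrupt here */
	return RTDM_IRQ_HANDLED;
}

/* ->open() runs in secondary (Linux) mode: safe for rtdm_irq_request(). */
static int demo_open(struct rtdm_fd *fd, int oflags)
{
	return rtdm_irq_request(&demo_irq_handle, DEMO_IRQ, demo_isr,
				0, "demo-device", NULL);
}

/* ->close() also runs in secondary mode: safe for rtdm_irq_free(). */
static void demo_close(struct rtdm_fd *fd)
{
	rtdm_irq_free(&demo_irq_handle);
}
```

Requesting or releasing the line from the real-time →ioctl_rt() handler instead would violate the constraint above.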

Booting with or without an initramfs?

By default, distribution kernels boot using an initramfs. This means that the kernel needs no particular built-in support: anything can be built as a module. These modules are loaded in a first boot stage, from the initramfs, then the final root filesystem may be mounted and the kernel boot may continue normally.

So, if you start from a distribution kernel, you will have to generate an initramfs containing the modules supporting the basic hardware needed to mount the root filesystem. Each distribution provides its own way to generate an initramfs for a new kernel.
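For instance, the usual distribution commands look like this (run as root; the kernel release string 4.4.71-xenomai is a placeholder for your actual kernel version):

```shell
# Debian/Ubuntu
update-initramfs -c -k 4.4.71-xenomai
# Fedora/RHEL
dracut --kver 4.4.71-xenomai
# Arch Linux
mkinitcpio -p linux
```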

If, on the other hand, you want to boot without an initramfs, you will have to build into the kernel, and not as modules, all the components necessary for mounting your root filesystem. This means:

  • the disk controller support;

  • the support for SCSI disks, if this controller is a SCSI controller or is driven by a new-style ATA controller;

  • the support for IDE disks, if this controller is driven by an old-style Parallel ATA controller;

  • the support for your NIC, if booting through the network;

  • the support for the filesystem used by your root filesystem (e.g. EXT3, NFS, etc.).
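As an illustration, a .config fragment for booting without an initramfs from a SATA disk with an ext4 root filesystem might contain (a sketch; adapt the driver options to your actual controller):

```
CONFIG_ATA=y
CONFIG_SATA_AHCI=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_EXT4_FS=y
```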

One thing is certain, however: you cannot boot a kernel where everything is built as a module without an initramfs.

Preposterous latency figures

If running the latency test shows definitely weird latency figures (i.e. hundreds of microseconds and more), the usual suspect on x86 is the System Management Interrupt (SMI).

Make sure to read the recommendations about tweaking the SMI configuration VERY CAREFULLY, otherwise you may damage your hardware.

Optimized CPU and platform settings

By tuning your kernel settings to match the real hardware, you can avoid useless overhead (e.g. CONFIG_SMP on non-SMP systems) and suboptimal code generation.

For instance, picking CONFIG_M586 or earlier requires Xenomai to emulate the on-chip TSC for timing duties, even if your CPU actually provides this special hardware counter; unfortunately, emulating a TSC is slow, and this clearly has a negative impact on the worst-case latency figures, even though your hardware could perform much better with a proper kernel configuration.

Therefore, rule #1 on x86 used as a real-time platform is not to blindly trust the configuration of the should-work-everywhere default kernel shipped by your favorite distro maker, but rather to adapt this configuration to your real hardware.
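In .config terms, exactly one processor type option is set; for example, on an Intel Core 2 based target (a sketch; pick the entry matching your own CPU):

```
CONFIG_MCORE2=y
# CONFIG_M586 is not set, which would force TSC emulation under Xenomai 2
```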