Pre-Silicon Debugging of glibc and the Linux Kernel using Virtual Platforms
by Antonios Salios - February 12, 2024


MachineWare is a provider of powerful Instruction Set Simulators (ISSs) and Full System Simulators or Virtual Platforms (VPs) that enable thorough verification of embedded software stacks long before first physical prototypes are back from the fab. This so-called “shift-left” approach enables our customers to have more stable, secure and efficient software earlier in the product design cycle, ultimately shortening time-to-market. Our flagship RISC-V VP, SIM-V, simplifies getting started with this exciting new architecture, whether your project is 32- or 64-bit based. And, as this blog post will show, it can also be used to hunt down obscure bugs in the Linux kernel and the glibc C library.

To make using SIM-V as easy as possible, MachineWare provides several software starting points for commonly used target software such as RISC-V Android, Linux, Buildroot, and Zephyr. As SIM-V and its underlying ISS are constantly being improved and extended to support more RISC-V extensions, these starting points need regular updates: simulated hardware is not very useful if the software cannot take advantage of all its new features. Sadly, upgrading to a new Buildroot version is sometimes easier said than done, and this time one particular issue stood out.

One of our target software starting points features an Xorg graphical environment that can be operated using a keyboard and mouse, as shown in Figure 1. However, in the new Buildroot version, the xf86-input-keyboard input driver was finally deprecated and no longer available. As a result, we had to switch to new drivers that now receive input data through the evdev input interface of the Linux kernel.

With minor issues resolved and Buildroot finally convinced to build the environment, the next logical step was to try it out. Everything appeared to work properly, except for one big problem: on our 32-bit SIM-V platform, the GUI did not respond to any mouse or keyboard input, rendering the graphical environment completely inoperable. On the 64-bit version, however, the GUI was fully functional, indicating a potential bug. Since the issue only occurs on 32-bit systems and not on 64-bit ones, the problem likely lies somewhere in the architecture-specific target software stack rather than in the input device simulation models.

Figure 1: Xorg desktop environment executing glxgears

(Background) The evdev Interface

But how does the Xorg driver get the data it needs from input devices? For that, the Linux kernel provides the so-called evdev ("event device") interface [1]. It abstracts raw input data from devices such as keyboards and mice and exposes it to userspace via character devices. In classic Unix fashion (“everything is a file”), these devices appear as files in the /dev/input/ directory. A userspace application can open and read these files to gather input events via the open() and read() syscalls. Further information about an input device, such as its capabilities, can be obtained via the ioctl() syscall. Usually, these syscalls are not called directly by application code but by the C standard library the application uses. An overview of the architecture is shown in Figure 2.

Figure 2: Diagram of input event exchange between kernel and userspace

The evdev interface provides the event data in the form of an input_event structure, shown in Listing 1. This kernel structure is made available to userspace applications via the kernel headers. We can see that the userspace application receives a timestamp and some specific event data from the input device. This timestamp, and in particular the C macros surrounding it, will play a crucial role in solving this problem: they lead to different behavior on 32-bit and 64-bit platforms.

struct input_event {
#if (__BITS_PER_LONG != 32 || !defined(__USE_TIME_BITS64))
	struct timeval time;
#define input_event_sec  time.tv_sec
#define input_event_usec time.tv_usec
#else
	__kernel_ulong_t __sec;
	__kernel_ulong_t __usec;
#define input_event_sec  __sec
#define input_event_usec __usec
#endif
	__u16 type;
	__u16 code;
	__s32 value;
};

Listing 1: Simplified excerpt of the input_event struct

(Background) The Year 2038 Problem & Fun With C Macros

Typically, Unix-like operating systems such as Linux measure time by counting the seconds since midnight on January 1, 1970 (the Unix epoch). This count is usually stored as a signed 32-bit integer, so the minimum representable date is 20:45:52 UTC on December 13, 1901, and the maximum representable date is 03:14:07 UTC on January 19, 2038. You may already have an idea where the problem lies: one second after the maximum date, the integer overflows and jumps back to December 1901. Applications will erroneously jump back more than a century, which will cause problems. This issue is known as the Y2038 problem [2].

One solution is to store timestamps in a 64-bit variable instead of a 32-bit one. However, this changes the layout of existing structures, breaking binary compatibility for existing systems and applications. For example, the timeval structure that stores seconds and microseconds would grow in size. It is therefore crucial to know the size of the variable used to store time.

The maintainers of the Linux kernel and of the C standard libraries agreed to introduce a macro (__USE_TIME_BITS64) that indicates the size of time variables [3]. But, as we will see, this agreement was subsequently broken by glibc, the most widely used C standard library.

These incompatibility issues have also been noticed by the kernel maintainers of the input subsystem. As seen in Listing 1, the input_event struct uses a timeval struct to store timestamps. Here the kernel maintainers had to implement a workaround for the Y2038 problem that does not break the kernel ABI on 32-bit systems [4]. Instead of using 64-bit time_t values in a timeval struct, the timestamp variables get re-interpreted as unsigned 32-bit integers. This extends the timestamp range to 2106 without breaking existing 32-bit applications. The updated interpretation is then used whenever the __USE_TIME_BITS64 macro is set during compilation. On 64-bit systems this workaround is not needed, as the ABI already uses 64-bit timestamps.


Analyzing the Problem

The first step in troubleshooting the input device problem was to analyze it with evtest. This utility prints the events generated by an evdev input device, letting us inspect exactly what a userspace application would receive.

With this tool, we confirmed that the keyboard and mouse were correctly identified. However, when a key such as the letter 'a' is pressed, the received events do not match the expected output: evtest fails to recognize the key code of the pressed key and reports a negative timestamp, as shown in Figure 3. It appears that evtest cannot correctly decode the data from the kernel's evdev interface. At this point, we needed to analyze the compilation of an evdev application further and use a debugger to inspect the structure's fields at runtime. Fortunately, debugging guest userspace applications in Linux with SIM-V is very easy: simply start a GDB server in the VP and connect your favorite debugger to the debugging session. From there, debugging is as straightforward as if the debuggee were running directly on the simulation host. As depicted in Figure 4, we used Microsoft's Visual Studio Code to debug evtest on the 32-bit RISC-V simulator.

Figure 3: Output of evtest when pressing the 'a' key

Figure 4: Debugging evtest with Visual Studio Code and SIM-V

Before further analyzing the structure and the macros it uses, let's review some important facts about RISC-V. On RV32, the long type is 4 bytes (32 bits) wide; on RV64, the same long type is 8 bytes (64 bits) wide. Therefore, the __BITS_PER_LONG macro in the input_event struct is defined as 32 on RV32 and 64 on RV64. In addition, RV32 uses 64-bit time values by default, so it is safe against the Y2038 bug.

Now when looking at the input_event struct we can see that the kernel uses __sec & __usec for the timestamps. These variables are of type __kernel_ulong_t which is 32 bits wide on RV32 and 64 bits wide on RV64.

On 64-bit RISC-V, a userspace application compiled against this structure uses the timeval branch: the #if preprocessor directive evaluates to true because __BITS_PER_LONG equals 64. Kernel and userspace thus use timestamp variables of the same width.

On 32-bit systems, the same branch is selected: __BITS_PER_LONG is 32, but __USE_TIME_BITS64 is (wrongly, at first glance) undefined; we will see why in the next section. The timeval structure with 64-bit time_t elements is used again, but now there is a discrepancy between kernel and userspace: the kernel writes 32-bit values, while userspace tries to read 64-bit wide values! The evdev application therefore receives invalid data from an out-of-bounds read on the stack, which is likely why evtest prints nonsensical data and the input drivers do not function properly. The culprit could be either a bug in the Linux kernel or in the C standard library used by evdev, which is glibc in our Buildroot environments.

After consulting the linux-input and glibc mailing lists, it appears that the problem is due to miscommunication between the kernel and glibc maintainers. The kernel headers expect the __USE_TIME_BITS64 macro to be defined whenever 64-bit time_t is used, regardless of the target architecture and its default size of time_t; otherwise they cannot select the correct definition [5]. This agreement between the kernel and the C standard library was broken by glibc.

Using this macro in the kernel headers is also not an ideal solution, since the kernel headers and internal glibc definitions are now tied together. This could break binary compatibility if glibc decides to change this part of the library in the future.

Unfortunately, this macro seems to be widely used regardless of its internal definition status. Other Linux headers, such as the asound.h header of the sound subsystem [3], use it to determine the size of time_t. Further discussion on the libc-alpha mailing list revealed that the C++ library libstdc++ also relies on this macro [6]. A quick search on GitHub also shows that it is used in many other projects.

Possible Solutions

There are two possible solutions to this problem. One option is to remove the macro from the input.h kernel header [3]. While this is a straightforward solution for the evdev interface, it may not be as simple for other projects and headers that utilize this macro.

Alternatively, the macro could always be set by the C library whenever 64-bit time_t is used. The alternative C library musl already defines and uses the macro in this way [7]. This has the additional advantage of not breaking compatibility with already existing software. A patch has been submitted to the libc-alpha mailing list to address the issue we uncovered.

After applying the patch to our Buildroot flow, evtest now correctly recognizes the key presses and the Xorg GUI can be used with keyboard and mouse on our 32-bit RISC-V simulator.

Figure 5: Output of evtest when pressing the 'a' key now with the correct kernel header definition


Summary

In this blog post, we delved into an intriguing bug we encountered while upgrading the Buildroot-based target software starting points for SIM-V. The problem arose when switching from deprecated input drivers to new ones using the Linux kernel's evdev interface. Specifically, on RISC-V 32-bit platforms, the graphical user interface failed to respond to mouse and keyboard input, while the 64-bit version worked flawlessly.

The bug was traced back to the input_event structure of the kernel headers. Unix-like systems measure time using a signed 32-bit integer that will overflow in January 2038. The maintainers of the Linux kernel's evdev interface developed a workaround that addresses the problem without breaking binary compatibility for existing applications. However, a miscommunication between kernel and C standard library maintainers regarding the __USE_TIME_BITS64 macro caused a discrepancy in the interpretation of the timestamp variables between the kernel and userspace on 32-bit systems. Analyzing the problem with SIM-V revealed that applications using the evdev interface on 32-bit systems, such as evtest, were reading out-of-bounds data from the stack, resulting in broken input drivers for Xorg.

Two possible solutions were considered to fix the bug. One involved removing the __USE_TIME_BITS64 macro from the kernel headers, but this still requires further investigation. The alternative solution, inspired by the approach taken in the musl C standard library, proposed having the macro set whenever 64-bit time_t is used. A patch implementing this solution has already been posted to the libc-alpha mailing list. After applying the glibc patch, the bug was resolved and evtest recognized keystrokes correctly. Consequently, the Xorg GUI became fully functional with keyboard and mouse input on our RISC-V 32-bit VP.

This bug underscores the importance of effective collaboration between kernel and C library maintainers and also highlights the importance of Virtual Prototyping tools such as SIM-V in identifying and fixing bugs in low-level target software. In the rapidly evolving landscape of embedded systems development, tools like SIM-V play a crucial role in ensuring the efficient integration of hardware and software components.