\reviewtitle{Measuring and Characterizing System Behavior Using Kernel-Level Event Logging} \reviewlabel{yaghmour00linux-trace} \reviewauthor{Karim Yaghmour and Michel R. Dagenais}

This casually written paper describes the Linux Trace Toolkit (LTT): ``Basically, events are forwarded to the trace module via the kernel trace facility. The trace module, visible in user space as an entry in the /dev directory, then logs the events in its buffer. Finally, the trace daemon reads from the trace module device and commits the recorded events into a user-provided file.''

So, three parts: (1) the kernel trace facility, built into the kernel, with which the trace module registers itself; (2) the trace module, which timestamps incoming event descriptions, stores them, and delivers them to the trace daemon; (3) the trace daemon, which retrieves events from the trace module and stores them in a user-provided file. Data analysis is performed offline. The trace module can be configured on the fly with ioctls.

Q1. Pretty good related-work section. In it the authors suggest that LTT would let us track the ordering of events. Is this true? Even in the case of multiple processors?

Q2. Why does the trace module need to use double buffering?

Q3. The trace module needs to be reentrant in case it is interrupted while logging and another event occurs. Doesn't this hurt the ordering of events?

Q4. In Section 5.4, the authors argue that LTT makes tracking down synchronization problems easy. Do we believe this?