U-Net: A User-Level Network Interface for Parallel and Distributed Computing
Cornell, 1995
Jonathan Ledlie
CS 736
March 24, 2000

U-Net continues a theme we have seen several times in the more recent papers: pulling traditionally kernel-level activities out of the kernel while leaving the kernel, the hardware, and legacy applications largely unscathed. The key problem the authors seek to solve is two-fold: keeping the network stack inside the kernel hampers the exploration of new protocols and, more importantly, it is slow and resource-costly. The kernel is slow mainly because its network buffers are limited (and must be shared among all applications) and because it must incessantly copy data to and from user space.

U-Net virtualizes the network interface, allowing each application to view it as its own. The application can then push whatever it wants out onto the wire, relying on the kernel only for the initial connection setup. If it chooses to use the TCP/IP or UDP/IP U-Net libraries or Active Messages (which associate a handler with each message), those libraries are available; if it chooses to create its own transport, it is free to do that too. As hinted at above, "Protection is assured through kernel control of channel set-up and tear-down." Another nice feature is that small packets go straight into the send/receive queues. Legacy applications still go through the kernel, using an emulated endpoint, which costs one level of indirection but lets them keep running.

U-Net works by changing the device driver for the specific hardware, which then multiplexes and demultiplexes messages between applications and the network interface. Adding a network interface to their collection of U-Net-capable ones entails modifying its device driver (again, not a huge deal). A more serious problem is that an application's network buffers must be pinned in memory so that the NI can copy to and from them via DMA without kernel intervention (at least in their true zero-copy scheme). If too many applications do this, available memory could be severely reduced. The authors argue that this could be alleviated by having some applications fall back to the old kernel-buffer method.
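To make the endpoint/queue model above concrete, here is a minimal sketch in C of what a U-Net-style user-level endpoint might look like from an application's point of view: a pinned communication segment plus send and receive descriptor queues that the NI driver consumes and fills directly. The structure and function names (post_send, poll_recv, and so on) are my own illustrations under those assumptions, not the paper's actual API.

    /* Sketch of a U-Net-style endpoint: the application owns a pinned
     * communication segment and descriptor queues; the (hypothetical) NI
     * driver moves data via DMA with no kernel copy on the data path. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define QUEUE_LEN 64
    #define BUF_SIZE  2048                        /* one buffer per descriptor */

    struct descriptor {
        uint32_t offset;                          /* buffer offset in segment  */
        uint32_t length;                          /* bytes to send / received  */
        uint16_t channel;                         /* channel id from setup     */
    };

    struct endpoint {
        uint8_t  segment[QUEUE_LEN * BUF_SIZE];   /* pinned comm. segment      */
        struct descriptor sendq[QUEUE_LEN];       /* app -> NI                 */
        struct descriptor recvq[QUEUE_LEN];       /* NI  -> app                */
        unsigned send_head, recv_tail;
    };

    /* Application side: build the message in its own buffer and post a
     * descriptor -- no system call, no copy into kernel buffers. */
    static void post_send(struct endpoint *ep, uint16_t chan,
                          const void *msg, uint32_t len)
    {
        unsigned slot = ep->send_head++ % QUEUE_LEN;
        uint32_t off  = slot * BUF_SIZE;

        memcpy(ep->segment + off, msg, len);      /* stays in user memory      */
        ep->sendq[slot] = (struct descriptor){ off, len, chan };
    }

    /* Application side: poll the receive queue the NI fills via DMA. */
    static int poll_recv(struct endpoint *ep, void *msg, uint32_t maxlen)
    {
        struct descriptor *d = &ep->recvq[ep->recv_tail % QUEUE_LEN];
        if (d->length == 0 || d->length > maxlen)
            return 0;                             /* nothing ready             */
        memcpy(msg, ep->segment + d->offset, d->length);
        ep->recv_tail++;
        return (int)d->length;
    }

    int main(void)
    {
        struct endpoint ep = {0};
        post_send(&ep, 1, "hello", 5);
        printf("posted %u bytes on channel %u\n",
               (unsigned)ep.sendq[0].length, (unsigned)ep.sendq[0].channel);
        return 0;
    }

The point of the sketch is that the kernel appears nowhere on the send or receive path; it would only be involved when the channel itself is created or torn down, which is where protection is enforced.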
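The Active Messages library mentioned above can be pictured the same way: each message names a handler, and the receive loop dispatches the payload to it. This is a rough sketch of that idea only; the handler-table layout and header format are assumptions for illustration (and the payload decode assumes a little-endian host).

    /* Sketch of Active Message dispatch layered on such an endpoint:
     * the first header byte selects a handler for the payload. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    typedef void (*am_handler)(const void *payload, uint32_t len);

    static void ack_handler(const void *payload, uint32_t len)
    {
        uint32_t seq;
        memcpy(&seq, payload, sizeof seq);        /* little-endian assumed */
        printf("ack for seq %u\n", (unsigned)seq);
        (void)len;
    }

    static am_handler handler_table[16] = { [0] = ack_handler };

    /* Called once per received message. */
    static void am_dispatch(const uint8_t *msg, uint32_t len)
    {
        handler_table[msg[0] % 16](msg + 4, len - 4);
    }

    int main(void)
    {
        uint8_t msg[8] = { 0, 0, 0, 0, 7, 0, 0, 0 };   /* handler 0, seq 7 */
        am_dispatch(msg, sizeof msg);
        return 0;
    }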