Presented at UW/UC Berkeley/NYU Cloud Day.

Abstract: In 1989, a number of U.S. research institutions and universities began collaborating on a set of Gigabit testbeds, trying to build the first networks that could deliver data to and from applications at the then seemingly crazy speed of 1 Gbps. As part of the Aurora testbed, we built a number of flexible “host-network interfaces” - flexible because we didn’t know which tasks would be done in the host and which should be offloaded. Our 1989 design - a couple of Intel CPUs, some big FPGAs, expensive optics - bore a striking similarity to the smartNICs of today. And in many ways we still don’t know which tasks should be offloaded, which is why we continue to see CPUs and FPGAs on NICs - although some tasks, such as TCP header processing and tunneling for network virtualization, are now well established as offloadable. This talk examines the long-lived tradeoff between keeping network functions close to the application (in the host) and offloading them to the NIC in the hope of better performance, and considers some of the implications of recent announcements of running a full hypervisor on the smartNIC.