By delaying the delivery of an interrupt, the GbE controller is also delaying the delivery of packet information. Although interrupt moderation increases overall interrupt-processing efficiency, it also increases the average latency incurred by each packet. As a result, determining optimal interrupt moderation settings usually involves a trade-off between latency and efficiency. When network use is low, delayed interrupts are unlikely to improve performance, since packet transmission or reception is relatively infrequent. In these cases, shorter interrupt delays are desirable to minimize the latency on each packet. When network use is high, the system must operate as efficiently as possible: every extra CPU or bus cycle spent on interrupt-processing overhead is one less cycle available for processing the actual packet data. Larger interrupt delays are desirable in these situations, to minimize interrupt-processing overhead and improve efficiency. At the same time, excessive interrupt delays might lead to resource starvation and overrun conditions. Software must negotiate between these two extremes to determine the optimal configuration for the expected workload.
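The trade-off described above can be illustrated with a back-of-the-envelope model. The sketch below is an assumption, not taken from any controller's datasheet: it uses a simplified timer scheme in which the first packet of a window starts a timer, the interrupt fires one delay later, and all packets arriving in between share that interrupt.

```python
def moderation_stats(pkts_per_sec: float, delay_us: float):
    """Estimate interrupt rate and average added latency for a given
    moderation delay (hypothetical simplified model: the first packet
    in a window starts a timer; the interrupt fires delay_us later)."""
    window_s = delay_us / 1e6
    # Packets arriving within one delay window are coalesced into one interrupt.
    pkts_per_window = max(1.0, pkts_per_sec * window_s)
    interrupts_per_sec = pkts_per_sec / pkts_per_window
    # A lone packet waits the full delay; when many packets share a
    # window, the average wait is roughly half the delay.
    avg_added_latency_us = delay_us if pkts_per_window <= 1 else delay_us / 2
    return interrupts_per_sec, avg_added_latency_us
```

With a 100 µs delay, a heavy load of one million packets per second collapses into about 10,000 interrupts per second at a modest ~50 µs average latency cost, while a light load of 1,000 packets per second sees no reduction in interrupt rate yet pays the full 100 µs per packet, matching the argument that moderation pays off mainly under high utilization.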