QuickPath Interconnect: Considerations in Packet Processing Applications

An Industry Whitepaper

A consideration of critical importance to stateful packet-processing applications, including deep packet inspection (DPI) and network policy control, is the behavior of Intel’s QuickPath Interconnect (QPI) architecture.

In an architecture update, Intel gave each processor in a multiprocessor system its own integrated memory controller and high-performance memory cache. With this design, whenever closely related tasks are assigned to different processors, those processors must rely on a point-to-point link between sockets (the QuickPath Interconnect) to access each other’s caches and memory for the related information – an architecture known as Non-Uniform Memory Access (NUMA).

However, when QPI memory checks between processors occur frequently, as they do in common network policy control applications, processing throughput degrades sharply. Many packet-processing solutions are vulnerable to massive performance loss from QPI memory checks, and only by understanding the issue can an operator properly evaluate competing alternatives.

To maximize packet-processing performance in multiprocessor environments, QPI memory checks must be kept to a minimum.

In policy control and packet processing, the only way to completely avoid QPI memory checks is to maintain processor affinity: all packets associated with a given flow, session, and subscriber must be processed by the same CPU.
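As an illustration of one half of this approach, a worker process can be pinned to a single CPU so that the flow state it maintains stays in that CPU's local cache and NUMA node. The sketch below uses the Linux-only `os.sched_setaffinity` call from the Python standard library; the function name `pin_to_cpu` is illustrative, not part of any product described here.

```python
import os

def pin_to_cpu(cpu: int) -> None:
    """Restrict the calling process to a single CPU (Linux only).

    The first argument 0 means "the current process". After this call,
    the scheduler will not migrate the process to another core, so the
    flow/session state it touches stays local to that core's cache.
    """
    os.sched_setaffinity(0, {cpu})

# Pin this worker to CPU 0 and confirm the affinity mask took effect.
pin_to_cpu(0)
assert os.sched_getaffinity(0) == {0}
```

In a real deployment one such worker would be started per core, each pinned to its own CPU, with packet steering (described below in the text) deciding which worker receives which flow.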

While there are architectural accommodations that can slightly decrease the number of QPI memory checks, only a solution that steers packets to a particular core (by flow, session, and subscriber) will maintain complete affinity.
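A minimal sketch of such steering, assuming a software dispatcher in front of per-core workers: hash a canonical form of the flow's 5-tuple and take the result modulo the worker count, so every packet of a flow (in both directions) lands on the same core. All names here (`NUM_WORKERS`, `flow_key`, `worker_for`) are illustrative, not taken from any particular product.

```python
import hashlib

NUM_WORKERS = 4  # assumption: one worker process pinned per core

def flow_key(proto: str, src_ip: str, src_port: int,
             dst_ip: str, dst_port: int) -> str:
    """Canonical 5-tuple string. Sorting the two endpoints makes the
    key symmetric, so both directions of a flow hash identically."""
    lo, hi = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    return f"{proto}|{lo}|{hi}"

def worker_for(proto: str, src_ip: str, src_port: int,
               dst_ip: str, dst_port: int) -> int:
    """Map a packet's 5-tuple to a worker (and thus a core) index."""
    key = flow_key(proto, src_ip, src_port, dst_ip, dst_port)
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_WORKERS

# Both directions of the same TCP flow steer to the same worker:
fwd = worker_for("tcp", "10.0.0.1", 1234, "192.0.2.5", 80)
rev = worker_for("tcp", "192.0.2.5", 80, "10.0.0.1", 1234)
assert fwd == rev
```

Because the mapping is deterministic, no shared lookup table is needed across cores, which is precisely what keeps each flow's state on a single processor and avoids cross-socket QPI memory checks. Steering by subscriber rather than flow would use the same pattern with a subscriber identifier as the hash key.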

UPDATED: 2017-04-17 09:44:04