Maximizing Performance with Core and Processor Affinity

A Sandvine Technology Showcase

Stateful devices perform multiple memory operations per packet (e.g., flow-state lookup, signature analysis, stateful modifications, and counter updates), and these operations depend on state established by previous packets. Each operation in turn targets a specific memory controller, location, and cache. If all processors access the same memory location, a severe bottleneck results from cache pollution and from latency and interlock on the interconnect bus.
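One common way to avoid that shared-location bottleneck is to shard per-packet state (such as counters) per core, so each core writes only its own copy and values are aggregated only on read. The sketch below is illustrative only, not Sandvine's implementation; the class name and API are invented for this example.

```python
from collections import Counter

class ShardedCounters:
    """Per-core counter shards in a 'shared nothing' style: each core
    increments only its own shard, so cores never contend on the same
    memory location; reads aggregate across all shards."""

    def __init__(self, n_cores: int):
        self.shards = [Counter() for _ in range(n_cores)]

    def incr(self, core: int, key: str, n: int = 1) -> None:
        # Only the owning core touches its shard; no cross-core writes.
        self.shards[core][key] += n

    def total(self, key: str) -> int:
        # Aggregation happens at read time, off the packet fast path.
        return sum(shard[key] for shard in self.shards)
```

In a real packet-processing pipeline the shards would also be padded to cache-line boundaries to prevent false sharing, a detail Python cannot express.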

When designing a stateful packet processing deployment, device performance on Intel®-based chips can vary wildly due to the impact of cache-coherency traffic over QPI (QuickPath Interconnect) and of local cache pollution.

To maximize performance, a policy control solution must maintain core affinity (which in turn guarantees processor affinity) – only in this way can policy control be applied in a “shared nothing” architecture that avoids latency-introducing memory references, whether across the QPI or to a polluted local cache.
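As a minimal sketch of what maintaining core affinity means at the operating-system level, the snippet below pins the current process to a single CPU using Linux's scheduler-affinity API. This is generic Linux functionality, not a description of the PTS internals.

```python
import os

# os.sched_getaffinity / os.sched_setaffinity are Linux-specific.
# Find the CPUs this process is currently allowed to run on...
available = sorted(os.sched_getaffinity(0))  # pid 0 = current process
target = available[0]

# ...then restrict it to exactly one core, so its working set stays
# in that core's caches and its memory references stay local.
os.sched_setaffinity(0, {target})
```

A multi-threaded packet processor would do the equivalent per worker thread (e.g., `pthread_setaffinity_np` in C), one thread per core.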

To ensure core affinity, Sandvine’s network processing unit (NPU) acts as the first point of examination for incoming packets within our Policy Traffic Switch (PTS) and ensures that all packets relating to a particular flow, session, and subscriber are presented in order and symmetrically to only one processing core.
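The key property described above is a symmetric flow-to-core mapping: both directions of a flow must land on the same core. A common way to achieve this is to hash a canonically ordered 5-tuple. The function below is an illustrative sketch with invented names, not the NPU's actual dispatch logic.

```python
import hashlib

def core_for_flow(src_ip: str, src_port: int,
                  dst_ip: str, dst_port: int,
                  proto: int, n_cores: int) -> int:
    """Map a flow's 5-tuple to one core index in [0, n_cores).

    Endpoints are sorted into a canonical order first, so the A->B and
    B->A directions of the same flow hash to the same core (symmetry)."""
    a = (src_ip, src_port)
    b = (dst_ip, dst_port)
    lo, hi = (a, b) if a <= b else (b, a)
    key = f"{lo[0]}:{lo[1]}|{hi[0]}:{hi[1]}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % n_cores
```

Because every packet of a flow (in either direction) selects the same core, that core can hold the flow's state privately, with no cross-core locking or coherency traffic.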

UPDATED: 2017-04-17 09:49:57