Multi-Queue Management for Check Point Security Gateway
Solution

Important: This SK is relevant to R80.30 with the 3.10 kernel, not to R80.30 with the default kernel.

Table of Contents:

  • Introduction to Multi-Queue
  • Multi-Queue Highlights
  • Multi-Queue Drivers and Limitations
  • Clish Multi-Queue Configuration
  • Multi-Queue Configuration Usage Examples
  • Notes

Introduction to Multi-Queue

The Multi-Queue feature scales network performance as the number of CPUs grows, by allowing an interface to transfer packets through more than one network queue at a time.

Today's high-end servers have more processors and more CPU cores. With single-queue networking, the protocol stack cannot scale: network performance stays flat as the number of CPUs increases, because each interface has only one TX queue and one RX queue and therefore cannot transmit or receive packets in parallel.

Multi-Queue support removes this bottleneck by enabling parallel packet processing.
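Because each queue gets its own IRQ, the split is visible at the operating-system level. As an illustrative sketch only (the interface name eth2 and the queue names mirror the example output later in this SK), from the Expert mode shell:

grep eth2 /proc/interrupts

A Multi-Queue interface shows one line per queue (for example, eth2-TxRx-0 and eth2-TxRx-1), each with its own IRQ number.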

Multi-Queue Highlights

  • Multi-Queue is now fully automated: 
    • Multi-Queue is enabled by default on all supported interfaces.
    • The number of queues on each interface is determined automatically, based on the number of available CPUs (SNDs - Secure Network Distributor cores) and on the NIC/driver limitations.
    • Queues are automatically affined to the SND cores (a verification sketch follows this list).
  • Applying a Multi-Queue configuration does not require a reboot.
  • Multi-Queue is now managed from the Clish command line.
  • Multi-Queue is now managed by the Out-of-the-Box experience performance tool.
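To verify that the queues were indeed affined to the SND cores, one hedged option is the standard affinity query from the Expert mode shell (the exact output format varies by version):

fw ctl affinity -l

The Clish command 'show interface <if name> multi-queue', described below, shows the same assignment from the Multi-Queue point of view.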

Multi-Queue Drivers and Limitations

Multi-Queue is supported on the following drivers:

Driver       Max Speed (Gbps)   Description                                                         Maximal Number of RX Queues
-----------  -----------------  ------------------------------------------------------------------  -------------------------------
igb          1                  Intel® Network Adapter Driver for PCIe 1 Gigabit Ethernet Network    2-16 (depends on the interface)
ixgbe        10                 Intel® Network Adapter Driver for PCIe 10 Gigabit Ethernet Network   16
i40e         40                 Intel® Network Adapter Driver for PCIe 40 Gigabit Ethernet Network   64
i40evf       40                 Intel® i40e driver for Virtual Function Network Devices              4
mlx5_core    40                 Mellanox® ConnectX® mlx5 core driver                                 60
ena          20                 Elastic Network Adapter in Amazon® EC2                               Configured automatically
virtio_net   10                 VirtIO paravirtualized device driver from KVM®                       Configured automatically
vmxnet3      10                 VMXNET Generation 3 driver from VMware®                              Configured automatically
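To check which driver an interface uses (and therefore which row of the table applies to it), one common approach is the standard ethtool query from the Expert mode shell; the interface name eth2 is only an example:

ethtool -i eth2

The first line of the output (for example, 'driver: igb') identifies the driver.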


Clish Multi-Queue Configuration

Clish Multi-Queue commands show and configure Multi-Queue on the supported interfaces.

Syntax

To show the existing Multi-Queue configuration:

show interface <if name> multi-queue [verbose]

To configure Multi-Queue:

set interface <if name> multi-queue {off | auto | manual core <IDs of CPU cores>}

Parameters

{off | auto | manual}

    Multi-Queue operational modes:

      • off - Multi-Queue is disabled.
      • auto - Multi-Queue is configured automatically, using the maximum number of SND cores. Interface queues and IRQ affinity are configured accordingly.
      • manual - Multi-Queue is configured manually: the user decides which CPU cores are used for Multi-Queue. Interface queues and IRQ affinity are configured accordingly.

core <IDs of CPU cores>

    One or more CPU core numbers on which to apply the command. The list delimiter is a comma.

      • Relevant for 'manual' mode only.
      • Note: Whitespace is prohibited. Examples: "core 0,1,4,5", "core 24".

verbose

    Shows additional information:

      • IRQ numbers
      • RX/TX queue counters


Multi-Queue Configuration Usage Examples


Note: Applying a Multi-Queue configuration to an interface may cause temporary packet loss, because the network interface is reset.

Set automatic Multi-Queue mode on interface eth2:

set interface eth2 multi-queue auto 

Set manual Multi-Queue mode on interface eth2, using the 6 CPU cores 0, 1, 2, 4, 5, 6 (no whitespace in the core list):

set interface eth2 multi-queue manual core 0,1,2,4,5,6
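Disable Multi-Queue on interface eth2, per the 'off' mode described above (the interface then reverts to a single queue):

set interface eth2 multi-queue off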

Show interface eth2 Multi-Queue configuration:

show interface eth2 multi-queue
Total 8 cores. Multiqueue 2 cores
i/f             type            state           config          cores
--------------------------------------------------------------------------
eth2            igb             Up              Auto            4,0
Note: The output does not include network interfaces that are currently in the down state.

Show the current Multi-Queue configuration, including IRQ numbers:

show interface eth2 multi-queue verbose
Total 8 cores. Multiqueue 2 cores: 0,4
i/f             type            state           config          cores
--------------------------------------------------------------------------
eth2            igb             Up              Auto            4(62),0(79)
core            interfaces      queue                irq        rx packets      tx packets
-------------------------------------------------------------------------------------------
0               eth2            eth2-TxRx-1          79         212             80
4               eth2            eth2-TxRx-0          62         16232           18901

Notes

  • NIC reset during Multi-Queue configuration: When a new Multi-Queue configuration is applied, the NIC is reset and there is a momentary loss of packets.
  • New NIC installed/enabled: When a new NIC is installed or enabled, it must be configured with the commands above.
  • Note for VSX users: In a VSX environment, the Multi-Queue configuration is scoped to the VS on which the tool is executed. As a result, only the interfaces that belong to that VS can be configured or shown. To manage the interfaces of a particular VS, use the 'set virtual-system' command to switch between VSs (see the sketch after this list).
    • In VSX, when all VLANs are on a bond and the bond is shared among the VSs, Multi-Queue should be configured in VS0.
  • Cluster users: Multi-Queue should be configured in the same way on all cluster members.
  • Note for Out-of-the-Box users: When the Out-of-the-Box performance tool is on, Multi-Queue cannot be configured manually. Specifically, the number of Multi-Queue queues cannot be changed, and changes to core affinity will cause the Out-of-the-Box tool to stop its work.
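For example, a minimal VSX sketch (the VS ID 2 and the interface name eth2 are illustrative assumptions, not values from this SK):

set virtual-system 2
set interface eth2 multi-queue auto

The 'set virtual-system' command switches the Clish context to the given VS; the Multi-Queue command then applies only to the interfaces of that VS.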
