Best Practices - Performance Optimization of Security Management Server installed on VMware ESX Virtual Machine

The following configurations are highly encouraged to optimize a Check Point Security Management Server installed on a VMware Virtual Machine:

Virtual Machine Guest Operating System

When manually building a Virtual Machine that runs Check Point software, optimal virtual hardware presentation is achieved by defining the Guest operating system (Guest OS Version) as follows:

Check Point operating systems, by kernel base (Guest OS Version) and Check Point Release:

Guest OS Version: RedHat Enterprise Linux version 5 (64-bit)
Check Point Releases:
  • R80.30 Security Gateway (Gateway and Standalone)
  • R80.20 T101 Security Gateway (Gateway and Standalone)
  • R80.10 (not Smart-1 525, 5050, and 5150)
  • All pre-R80.10 releases

Guest OS Version: RedHat Enterprise Linux version 7 (64-bit)
Check Point Releases:
  • R81.10 Management
  • R81 Management
  • R80.40 Management
  • R80.30 Management
  • R80.20 T101 Management
  • R80.20.M1, R80.20.M2
  • R80.10 Smart-1 525, 5050, and 5150


  • Upgrading to a later Check Point OS version that uses a higher level version of the kernel does not require a change to the Guest OS Version.
  • Using a higher-level Guest OS Version than the RedHat Enterprise Linux version indicated for the installed kernel is not recommended: it changes how the Guest OS handles the hardware baseline, which can lead to unpredictable results and require a complete rebuild of the Virtual Machine.

To determine the kernel version of the installed Check Point OS, use the uname -r command.
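For example (the output below is only illustrative; the exact string varies by release and hotfix level), a 2.6.18-based kernel corresponds to RedHat Enterprise Linux 5, while a 3.10-based kernel corresponds to RedHat Enterprise Linux 7:

    # uname -r
    3.10.0-957.21.3cpx86_64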

Using other VMware Guest OS Versions may not provide optimal results, because the Check Point OS closely follows the RedHat Enterprise Linux implementation.
Future Check Point OS kernel versions may provide support for higher levels of Guest OS Versions.


Disk Controller (SCSI Controller)

Use of the LSI Logic SAS or LSI Logic Parallel adapter is recommended. VMware-specific controllers, such as the Paravirtual SCSI controller, are not supported and are not seen by the operating system.



Disk Provisioning

Always use Thick provisioning (thick lazy-zeroed is acceptable); never thin-provision disk resources.

Thick provisioning allocates the entire virtual disk before use and zeroes it out, whereas thin-provisioned virtual disks are neither pre-allocated nor pre-zeroed. When the guest issues a write to an unallocated segment of a thin-provisioned disk, it must wait for the disk to grow and be zeroed.
The Security Management Server performs a large volume of writes (for example, logging and policy installation) that are critical components of the infrastructure, so I/O performance bottlenecks there should be avoided.

Make sure the disk partitions within the guest are aligned.

Unaligned or misaligned partitioning causes I/O to cross a track boundary, which generates additional I/O. This incurs a penalty on both latency and throughput. The additional I/O can significantly impact system resources on some host types and applications - especially disk-intensive applications, such as SmartEvent or heavily loaded logging modules. An aligned partition ensures that a single I/O is serviced by a single device, eliminating the additional I/O and improving overall performance.

For more information and remediation, refer to the documentation of the SAN provider.
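As a quick check from within the guest (the device name /dev/sda is only an example, and the 2048-sector rule assumes 512-byte sectors), the partition table can be listed in sectors:

    # fdisk -lu /dev/sda

A partition whose starting sector is a multiple of 2048 begins on a 1 MiB boundary and is considered aligned.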



Memory

Allocate at least 6 GB of memory to the Virtual Machine. For Virtual Machines running Multi-Domain Security Management Server, plan to allocate 6 GB for the base installation plus 2 GB for each additional Domain. Consider reserving 50% of the allocated memory, and consider increasing the Virtual Machine's resource shares allocation.
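For example, following this guidance, a Multi-Domain Security Management Server with 5 additional Domains would be allocated at least 6 GB + 5 x 2 GB = 16 GB of memory, with a reservation of about 8 GB (50%).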



CPU

In multi-CPU (SMP) guests, the guest operating system can migrate processes from one vCPU to another. This migration incurs a small CPU overhead; if it is very frequent, it might be helpful to pin guest threads or processes to specific vCPUs. Allocate only as many vCPUs as are necessary.

For heavily subscribed environments, consider reserving at least 30% of the CPU frequency and consider increasing the Virtual Machine's resource shares allocation.


Virtual Network Adapter

The default virtual network adapter emulated in a Virtual Machine is either an AMD PCnet32 device (vlance / "Flexible") or an Intel E1000 device (E1000). Using the "Flexible" NIC driver in SecurePlatform OS / Gaia OS is not recommended, as it has been shown to carry a significant performance penalty.
VMware also offers the VMXNET family of paravirtualized network adapters. The VMXNET family contains VMXNET, Enhanced VMXNET (available since ESX/ESXi 3.5), and VMXNET Generation 3 (VMXNET3; available since ESX/ESXi 4.0).

Check Point recommends the use of VMXNET Generation 3 (VMXNET3) on all supported Gaia releases.
On Gaia releases that do not support VMXNET3, E1000 is the recommended choice.

VMXNET3 is included in the Security Gateway VE OVF for R77 and higher.
The ISO version of Security Gateway VE supports VMXNET3 on R77.30 and higher using Gaia OS.

For more information on the configuration and usage of VMXNET3 network adapters, refer to sk110686 - How to configure VMXNET3 Network Adapters.
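To confirm from within the guest which driver a given interface is actually using (the interface name eth0 below is only an example), query it with ethtool:

    # ethtool -i eth0
    driver: vmxnet3

Only the first line of the output is shown above; a VMXNET3 adapter reports the vmxnet3 driver, while an E1000 adapter reports e1000.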

In some cases, low receive throughput in a Virtual Machine can be caused by insufficient receive buffers in the receiver network device. If the receive ring in the guest operating system's network driver overflows, packets are dropped in the VMkernel, degrading network throughput. A possible workaround is to increase the number of receive buffers, though this might increase the host physical CPU workload. For VMXNET3 and E1000, the default number of receive and transmit buffers is controlled by the guest driver, with a maximum of 4096 for both.
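On drivers that expose their ring settings (not every adapter/driver combination does, and the interface name eth0 is only an example), the current and maximum ring sizes can be inspected and raised with ethtool:

    To show the current and maximum RX/TX ring sizes:
    # ethtool -g eth0

    To raise the receive ring toward its maximum (4096 here is an example value):
    # ethtool -G eth0 rx 4096

Note that a change made with ethtool -G does not persist across reboots unless it is reapplied at boot time.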


I/O Scheduling

As of the Linux 2.6 kernel, the default I/O scheduler is Completely Fair Queuing (CFQ). Testing has shown that NOOP or Deadline performs better for virtualized Linux guests. ESX uses an asynchronous, intelligent I/O scheduler, so virtual guests should see improved performance by allowing ESX to handle I/O scheduling (this also applies to Gaia 3.10).

This change can be implemented in a few different ways.

  • The scheduler can be set for each hard disk unit. To check which scheduler is being used for a particular drive, run this command:
    # cat /sys/block/<DISK_DEVICE>/queue/scheduler

    For example, to check the current I/O scheduler for disk sda:

    # cat /sys/block/sda/queue/scheduler
    noop anticipatory deadline [cfq]

    In this example, the sda drive scheduler is set to CFQ.

  • To change the scheduler on a running system, run this command (substitute the SCHEDULER_TYPE with relevant scheduler):

    # echo SCHEDULER_TYPE > /sys/block/<DISK_DEVICE>/queue/scheduler

    For example, to set the I/O scheduler for disk sda to NOOP:
    # echo noop > /sys/block/sda/queue/scheduler

  • Checking the new I/O setting:
    # cat /sys/block/sda/queue/scheduler
    [noop] anticipatory deadline cfq

Note: This command will not change the scheduler permanently. The scheduler will be reset to the default on reboot. To make the system use a specific scheduler by default, add an elevator parameter to the default kernel entry in the GRUB boot loader /boot/grub/grub.conf file.

For example, to make NOOP the default scheduler for the system, the /boot/grub/grub.conf kernel entry would look like this:

title Start in normal mode
	root (hd0,0)
	kernel /vmlinuz ro  vmalloc=256M noht root=/dev/vg_splat/lv_current elevator=noop panic=15 console=SERIAL crashkernel=64M@16M 3 quiet
	initrd /initrd

With the elevator parameter in place, the system will set the I/O scheduler to the one specified on every boot.
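As a minimal sketch of automating this edit (assuming the GRUB legacy /boot/grub/grub.conf layout shown above and that no kernel entry already carries an elevator parameter; back up the file first), the parameter can be appended to every kernel line with sed:

    # cp /boot/grub/grub.conf /boot/grub/grub.conf.bak
    # sed -i '/^[[:space:]]*kernel /s/$/ elevator=noop/' /boot/grub/grub.conf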


Disk Queue Depth

Increasing the disk's queue_depth can improve disk I/O throughput.

  • To check the current setting, run:

    # cat /sys/block/<DISK_DEVICE>/device/queue_depth

    For example:

    # cat /sys/block/sda/device/queue_depth


  • For each disk presented to the Virtual Machine, change the queue depth as follows:

    # echo "[new_value]" > /sys/block/<DISK_DEVICE>/device/queue_depth

    For example:

    # echo "64" > /sys/block/sda/device/queue_depth

I/O request queue

The I/O request queue size (nr_requests) controls how many read and write requests can be queued. With the Deadline scheduler, this should be set to twice the queue_depth; in testing, this value gave the best performance improvement for nr_requests.

  • To check the current setting: # cat /sys/block/<DISK_DEVICE>/queue/nr_requests

    For example:
    # cat /sys/block/sda/queue/nr_requests

  • For each disk presented to the Virtual Machine, change the number of requests as follows:
    # echo "[new_value]" > /sys/block/<DISK_DEVICE>/queue/nr_requests

    For example:
    # echo "128" > /sys/block/sda/queue/nr_requests

