Best Practices - Performance Optimization of Security Management Server installed on VMware ESX Virtual Machine
Solution

The following configurations are strongly recommended to optimize a Check Point Security Management Server installed on a VMware Virtual Machine:

Virtual Machine Guest Operating System

Optimal virtual hardware presentation can be achieved when manually building a Virtual Machine running Check Point software by defining the Guest operating system (Guest OS Version) as follows:

Check Point operating systems running:

  • Kernel version 2.6:  "RedHat Enterprise Linux version 5 (64-bit)"
  • Kernel version 3.10:  "RedHat Enterprise Linux version 7 (64-bit)"

 

Kernel Version / Check Point Release:

Kernel 2.6:

  • All releases prior to R80.10
  • R80.10 (not Smart-1 525, 5050, and 5150 ISO)
  • R80.20 T101 Gateway ISO (Gateway and Standalone)

Kernel 3.10:

  • R80.10 Smart-1 525, 5050, and 5150 ISO
  • R80.20.M1
  • R80.20 T101 Management ISO
  • R80.20 GoGo EA Gateway

Upgrading an installation to a later Check Point OS version that uses a higher kernel version does not require changing the Guest OS Version.

Using a higher Guest OS Version than the RedHat Enterprise Linux version that corresponds to the installed kernel is not recommended: it changes how the Guest OS presents the hardware baseline, which can lead to unpredictable behavior and may require a complete rebuild of the Virtual Machine.

To determine the kernel version of the installed Check Point OS, run this command:

# uname -r
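
As an illustration only (the exact string varies by build), a release based on the 3.10 kernel typically reports something resembling:

# uname -r
3.10.0-693cpx86_64

while 2.6-based releases report a string beginning with 2.6.18. Match the major kernel version against the table above to select the Guest OS Version.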

Using other VMware Guest OS Versions may not provide optimal results, since the Check Point OS closely follows the RedHat Enterprise Linux implementation.

Future Check Point OS kernel versions may provide support for higher levels of Guest OS Versions.

 

Disk Controller (SCSI Controller)

Use of the LSI Logic SAS or LSI Logic Parallel adapters is recommended. VMware-specific controllers, such as the Paravirtual SCSI controller, are not supported and are not detected by the operating system.
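
To confirm which controller the guest operating system actually detects, one quick check from Expert mode (a minimal sketch; lspci is available on Gaia OS and SecurePlatform OS) is:

# lspci | grep -i scsi

The output should list the LSI Logic adapter. If no supported controller appears, revisit the Virtual Machine's SCSI controller type.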

 

Disk

Always use Thick provisioning (Thick Provision Lazy Zeroed is acceptable); never Thin-provision disk resources.

Thick provisioning allocates the entire virtual disk before use and zeroes it out, whereas Thin-provisioned virtual disks are neither pre-allocated nor pre-zeroed. When a guest writes to an unallocated segment of a Thin-provisioned disk, the write must wait for the disk to grow and be zeroed.
The Security Management Server performs a large volume of writes, such as logs and policy installations, which are critical components of the infrastructure, so I/O performance bottlenecks must be avoided.

Make sure the disk partitions within the guest are aligned.

Unaligned or misaligned partitioning results in the I/O crossing a track boundary, resulting in additional I/O. This incurs a penalty on both latency and throughput. The additional I/O can impact system resources significantly on some host types and some applications - especially disk-intensive applications, such as SmartEvent or heavily loaded logging modules. An aligned partition ensures that the single I/O is serviced by a single device, eliminating the additional I/O and resulting in overall performance improvement.
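
As an illustrative check only (assuming a classic MBR layout; consult the storage vendor's guidance for the authoritative alignment values), the partition start sectors can be listed from Expert mode with:

# fdisk -lu /dev/sda

A partition whose starting sector is a multiple of 2048 (a 1 MiB boundary) is generally aligned; a legacy start sector of 63 usually indicates misalignment.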

For more information and remediation, refer to the documentation of the SAN provider.

 

Memory

Allocate at least 6 GB of memory to the Virtual Machine. For Virtual Machines running Multi-Domain Security Management Server, plan to allocate 6 GB for the base installation plus 2 GB for each additional Domain. Consider reserving 50% of the allocated memory and increasing the Virtual Machine's resource shares allocation.
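
For example, following this guideline, a Multi-Domain Security Management Server with 5 additional Domains would be allocated 6 GB + (5 x 2 GB) = 16 GB, with an 8 GB memory reservation under the 50% recommendation.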

 

vCPUs

In multi-CPU (SMP) guests, the guest operating system can migrate processes from one vCPU to another. This migration incurs a small CPU overhead. If the migration is very frequent, it might be helpful to pin guest threads or processes to specific vCPUs (see the sketch below). Allocate only as many vCPUs as necessary.

For heavily-subscribed environments, consider reserving at least 30% of the CPU frequency and increasing the Virtual Machine's resource shares allocation.
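
A minimal sketch of in-guest pinning (illustrative only; the Check Point fwd daemon is used here merely as an example process, and pinning should be considered only after confirming that migrations are actually frequent):

# pidof fwd
1234
# taskset -pc 2 1234

The first command returns the process ID (1234 is a placeholder), and the second binds that process to vCPU 2. The binding is not persistent and lasts only for the lifetime of the process.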

 

Virtual Network Adapter

The default virtual network adapter emulated in a Virtual Machine is either an AMD PCnet32 device (vlance / "Flexible"), or an Intel E1000 device (E1000). Never utilize the "Flexible" NIC driver in SecurePlatform OS / Gaia OS, as it has been shown to carry a significant performance penalty. In most cases, Check Point recommends the Intel E1000 device be utilized. When configuring the guest Virtual Machine as noted above, this is the default NIC emulation.

VMware also offers the VMXNET family of paravirtualized network adapters. The VMXNET family contains VMXNET, Enhanced VMXNET (available since ESX/ESXi 3.5), and VMXNET Generation 3 (VMXNET3; available since ESX/ESXi 4.0). The latest Gaia OS releases include the VMXNET drivers, but R&D recommends against using these drivers except where Check Point Security Gateway VE R77.10 or newer is used.

In some cases, low receive throughput in a Virtual Machine can be caused by insufficient receive buffers in the receiver network device. If the receive ring in the guest operating system's network driver overflows, packets will be dropped in the VMkernel, degrading network throughput. A possible workaround is to increase the number of receive buffers, though this might increase the host physical CPU workload. For VMXNET3 and E1000, the default number of receive and transmit buffers is controlled by the guest driver, and the maximum possible for both is 4096.
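
A minimal sketch for inspecting and enlarging the receive ring from inside the guest (assuming the interface name eth0; the setting is not persistent across reboots by default):

# ethtool -g eth0
# ethtool -G eth0 rx 4096

The first command prints the pre-set maximum and current ring sizes; the second raises the receive ring to the 4096 maximum mentioned above. Weigh the change against the additional host CPU cost noted earlier.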

 

Time

  1. For the most accurate time keeping, configure the system to use NTP. The VMware Tools time-synchronization option is not considered a suitable solution. Versions prior to ESXi 5.0 were not designed for the same level of accuracy and do not adjust the guest time when it is ahead of the host time. Ensure that the VMware Tools time-synchronization feature is disabled.

  2. Change the timer interrupt rate.

    For Gaia OS and SecurePlatform OS installations, append the following kernel parameters to the kernel line in the /boot/grub/grub.conf file:

    notsc divider=10 clocksource=acpi_pm

    Example:


    • For Gaia 32-bit:

      title Start in normal mode
              root (hd0,0)
              kernel /vmlinuz ro  vmalloc=256M noht notsc divider=10 clocksource=acpi_pm root=/dev/vg_splat/lv_current panic=15 console=SERIAL crashkernel=64M@16M 3 quiet
              initrd /initrd
      

    • For Gaia 64-bit:

      title Start in 64bit normal mode
              root (hd0,0)
              kernel /vmlinuz ro  vmalloc=256M noht notsc divider=10 clocksource=acpi_pm root=/dev/vg_splat/lv_current panic=15 console=SERIAL crashkernel=64M@16M 3 quiet
              initrd /initrd
                  

For additional information about time keeping best practices within Virtual Machines, refer to the VMware documentation on this topic.
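
As an illustrative verification only (vmware-toolbox-cmd is part of VMware Tools and is assumed to be installed in the guest), the timer parameters and the VMware Tools synchronization state can be checked after a reboot:

# cat /proc/cmdline
# vmware-toolbox-cmd timesync status
# vmware-toolbox-cmd timesync disable

The kernel command line should contain the notsc, divider=10, and clocksource=acpi_pm parameters, and timesync should report that synchronization is disabled, in line with item 1 above.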

 

I/O Scheduling

As of the Linux 2.6 kernel, the default I/O Scheduler is Completely Fair Queuing (CFQ). Testing has shown that NOOP or Deadline perform better for virtualized Linux guests. ESX uses an asynchronous intelligent I/O scheduler, and for this reason virtual guests should see improved performance by allowing ESX to handle I/O scheduling.

This change can be implemented in a few different ways.

The scheduler can be set for each hard disk unit. To check which scheduler is being used for a particular drive, run this command:
# cat /sys/block/<DISK_DEVICE>/queue/scheduler

For example, to check the current I/O scheduler for disk sda:
# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]

In this example, the sda drive scheduler is set to CFQ.

To change the scheduler on a running system, run this command (substitute the SCHEDULER_TYPE with relevant scheduler):
# echo SCHEDULER_TYPE > /sys/block/<DISK_DEVICE>/queue/scheduler

For example, to set the I/O scheduler for disk sda to NOOP:
# echo noop > /sys/block/sda/queue/scheduler

Checking the new I/O setting:
# cat /sys/block/sda/queue/scheduler
[noop] anticipatory deadline cfq

Note: This command will not change the scheduler permanently. The scheduler will be reset to the default on reboot. To make the system use a specific scheduler by default, add an elevator parameter to the default kernel entry in the GRUB boot loader /boot/grub/grub.conf file.

For example, to make NOOP the default scheduler for the system, the /boot/grub/grub.conf kernel entry would look like this:

title Start in normal mode
	root (hd0,0)
	kernel /vmlinuz ro  vmalloc=256M noht root=/dev/vg_splat/lv_current elevator=noop panic=15 console=SERIAL crashkernel=64M@16M 3 quiet
	initrd /initrd

With the elevator parameter in place, the system will set the I/O scheduler to the one specified on every boot.

 

Disk Queue Depth

Increasing the disk's queue_depth value can improve Disk I/O throughput. To check the current setting, run:

# cat /sys/block/<DISK_DEVICE>/device/queue_depth

For example:

# cat /sys/block/sda/device/queue_depth

32

For each disk presented to the Virtual Machine, change the queue depth as follows:

# echo "[new_value]" > /sys/block/<DISK_DEVICE>/device/queue_depth

For example:

# echo "64" > /sys/block/sda/device/queue_depth

I/O request queue

The nr_requests parameter controls the size of the I/O request queue, that is, the number of read and write requests that can be queued. With the Deadline scheduler, set nr_requests to twice the queue_depth value; in testing, this setting provided the greatest performance improvement.

To check the current setting:

# cat /sys/block/<DISK_DEVICE>/queue/nr_requests

For example:

# cat /sys/block/sda/queue/nr_requests

64

For each disk presented to the Virtual Machine, change the number of requests as follows:

# echo "[new_value]" > /sys/block/<DISK_DEVICE>/queue/nr_requests

For example:

# echo "128" > /sys/block/sda/queue/nr_requests

 


 


This solution has been verified for the specific scenario described by the combination of Product, Version, and Symptoms. It may not work in other scenarios.
