When a VRRP cluster is configured per sk92061 - How to configure VRRP on Gaia (i.e., "ClusterXL" is enabled in the cluster object for State Synchronization), a maximum of 127 VRRP interfaces is supported.
This is because the same cluster member must be the VRRP Master for all virtual routers (VRIDs) to avoid an Active/Active scenario. A maximum of one VRID with one VIP address per interface is supported. In addition, each VRID must be configured to monitor every other VRRP-enabled interface, with priority deltas that facilitate complete failover to the VRRP Backup cluster member.
With a maximum priority of 254 and minimum priority delta of 2 per monitored interface, the maximum number of interfaces possible is 127.
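The arithmetic above can be sketched as a quick check (a minimal illustration; the variable names are ours, not Check Point's):

```python
# Maximum number of VRRP-enabled interfaces under the constraints above:
# the highest configurable priority is 254, and each monitored interface
# needs a priority delta of at least 2 to guarantee complete failover.
max_priority = 254
min_delta_per_interface = 2

max_interfaces = max_priority // min_delta_per_interface
print(max_interfaces)  # 127
```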
When a VRRP cluster is configured per sk92061 - How to configure VRRP on Gaia (i.e., "ClusterXL" is enabled in the cluster object for State Synchronization), only Active/Passive environments are supported.
This allows only one VRID with one VIP address per interface. The same cluster member must be the VRRP Master for all configured VRIDs to avoid an Active/Active scenario. As such, each VRID must be configured to monitor every other VRRP-enabled interface, with priority deltas that facilitate complete failover to the VRRP Backup cluster member.
The maximum qualified number of cluster members is two. Each cluster member can be the VRRP Master for one of two VRIDs on an interface at the same time. Only Static Routes are supported with this configuration. Important Note: "Monitor Firewall State" must be disabled with this configuration.
VRRP v2 is defined in RFC 2338. This is an excerpt from this RFC:
This memo defines the Virtual Router Redundancy Protocol (VRRP). VRRP specifies an election protocol that dynamically assigns responsibility for a virtual router to one of the VRRP routers on a LAN. The VRRP router controlling the IP address(es) associated with a virtual router is called the Master, and forwards packets sent to these IP addresses. The election process provides dynamic fail over in the forwarding responsibility should the Master become unavailable. This allows any of the virtual router IP addresses on the LAN to be used as the default first hop router by end-hosts. The advantage gained from using VRRP is a higher availability default path without requiring configuration of dynamic routing or router discovery protocols on every end-host.
VRRP Monitored Circuit (VRRP MC) was created to enhance the performance of VRRP. It eliminates asymmetric routes by enabling VRRP running on each network interface to monitor the state of any other network interface. VRRP MC is specific to IPSO.
Master_Down_Interval: Time interval for Backup to declare Master down (seconds). Calculated as:
(3 * Advertisement_Interval) + Skew_time
Skew_Time: Time to skew Master_Down_Interval in seconds. Calculated as:
( (256 - Priority) / 256 )
This means that the greater the priority, the smaller the Skew_Time, which results in a faster transition. Therefore, using a priority of 254 will result in the fastest transition times.
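Putting the two formulas together, a small sketch (function and variable names are ours) shows why a higher priority yields a faster transition:

```python
def skew_time(priority):
    # Skew_Time = (256 - Priority) / 256, per RFC 2338
    return (256 - priority) / 256

def master_down_interval(priority, advertisement_interval=1):
    # Master_Down_Interval = (3 * Advertisement_Interval) + Skew_Time
    return 3 * advertisement_interval + skew_time(priority)

print(master_down_interval(254))  # 3.0078125 -- priority 254: fastest transition
print(master_down_interval(100))  # 3.609375  -- a lower priority waits longer
```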
The next question to ask is, "Why is it designed this way?". In VRRP v2, the master always has a priority of 255. A backup may have a priority of 254 or lower. If there was a power failure and both platforms rebooted, this gives the master the edge in reclaiming its position in the VRRP hierarchy.
Typically, failover occurs between 3-4 seconds, using default values.
Chances are that the effective priority of the "disabled" platform is still greater than the effective priority of the backup platform. Use Network Voyager to look at the VRRP Monitor page to verify this. The priority delta should be configured so that the system drops to Backup if even one monitored interface is down on the Master.
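A minimal sketch of that sizing rule (the function and the numbers are illustrative, not from this article): the Master's effective priority is its configured priority minus the deltas of its failed monitored interfaces, and failover requires that it drop below the Backup's priority.

```python
def effective_priority(base_priority, deltas_of_down_interfaces):
    # Each down monitored interface subtracts its configured priority delta.
    return base_priority - sum(deltas_of_down_interfaces)

master_base, backup_priority = 100, 95
delta = 10  # per monitored interface; large enough that one failure triggers failover

# One interface down on the Master: 100 - 10 = 90 < 95, so the Backup takes over.
print(effective_priority(master_base, [delta]) < backup_priority)  # True
```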
The Cold Start Delay was introduced to solve one particular problem: the IP Series platform running the IPSO OS was able to become the VRRP Master before Check Point FireWall could fully start. In an Active/Passive VRRP configuration, where one platform was the established VRRP Master, this situation is likely to occur when the VRRP Master is rebooted. As soon as it is able, the VRRP Master would take back the VRRP IP addresses. This would result in established sessions being dropped by the VRRP Master because its state tables did not have the necessary entries.
Enabling the Cold Start Delay allows Check Point FireWall to start up and synchronize with the other cluster member before a VRRP transition causes all sessions to route through the VRRP Master.
An established VRRP Master is the cluster member configured with a priority that is greater than the priority of the VRRP Backup(s). A good reason for this configuration is when the VRRP Master is a more robust platform than the VRRP Backup platform. However, if the two cluster members are identical, either one can be the VRRP Master, and the VIP priorities can be the same on each platform. In that case, the VRRP Master will usually be the first cluster member that comes online.
It is possible, though improbable, that during a simultaneous reboot of VRRP cluster members, one cluster member becomes the VRRP Master for the external VIP address and the other cluster member becomes the VRRP Master of the internal VIP address.
There is a tie-breaking algorithm used in situations like this to prevent this very occurrence. Should both VRRP cluster members see VRRP "HELLO" packets from each other at the same time, the cluster member with the numerically greater IP address becomes the VRRP Master. For this reason, it would be wise for all interfaces on one cluster member to be numerically greater than the interfaces on the peer cluster member. For example, Member_A should be the X.X.X.2 host and Member_B should be the X.X.X.1 host on all connected interfaces.
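A minimal sketch of that numeric comparison (using Python's ipaddress module; the member addresses are examples of ours, not from the article):

```python
import ipaddress

# Tie-break: the cluster member with the numerically greater IP address
# on the interface becomes the VRRP Master.
member_a = ipaddress.ip_address("192.168.1.2")
member_b = ipaddress.ip_address("192.168.1.1")

winner = "Member_A" if member_a > member_b else "Member_B"
print(winner)  # Member_A
```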
ClusterXL runs in Sync-only mode and both cluster members are considered "Active". Output of the "cphaprob state" command would look like this:
HostName[admin]# cphaprob state

Cluster Mode:   Sync only (OPSEC) with IGMP Membership

Number     Unique Address   Firewall State (*)
1 (local)  188.8.131.52     Active
2          184.108.40.206   Active

(*) FW-1 monitors only the sync operation and the security policy. Use OPSEC's monitoring tool to get the cluster status.