Deploying a Check Point Cluster in Oracle Cloud Infrastructure (OCI)
Solution

Note:  The current recommended version for Oracle Cloud is R80.40 with the latest GA Jumbo.

Table of Contents:

  1. Overview
  2. Prerequisites
  3. Method of Operation
  4. Solution Topology
  5. CloudGuard Cluster Deployment
  6. Configuring OCI Cluster in Check Point Security Management
  7. Known Limitations
  8. Related Documentation
  9. Adding Additional Secondary IPs to OCI Cluster

[1] Overview

Oracle Cloud Infrastructure combines the elasticity and utility of public cloud with the granular control, security, and predictability of on-premises infrastructure, delivering the agility and fast-paced innovation of cloud computing together with high-performance, highly available, and cost-effective infrastructure services.

Check Point CloudGuard for Oracle extends advanced Threat Prevention security to protect customers' OCI environments from malware and other sophisticated threats. As an Oracle-certified solution, CloudGuard enables you to easily and seamlessly secure your workloads, data, and assets while providing secure connectivity across your cloud and on-premises environments.

This article guides you through deploying a Check Point CloudGuard High Availability (HA) cluster in Oracle Cloud Infrastructure.

[2] Prerequisites

It is assumed that the user is familiar with general Oracle Cloud concepts, features, and terms used throughout this article. These include the following:

  • Compartments
  • Virtual Cloud Networks (VCNs), subnets, route tables, and security rules
  • vNICs and Secondary Private/Public IP addresses
  • Dynamic Groups and IAM policies
  • Marketplace listings and custom images

[3] Method of Operation

A traditional Check Point cluster environment uses multicast or broadcast in order to perform state synchronization and health checks across cluster members.

Since multicast and broadcast are not supported in Oracle Cloud, the Check Point cluster members communicate with one another using unicast. In addition, in a regular ClusterXL configuration working in High Availability mode, cluster members use Gratuitous ARP to announce the MAC address of the Active member associated with the Virtual IP address (during normal operation and when a cluster failover occurs).

In OCI, the cluster members implement this behavior themselves by making API calls to OCI. When the Active cluster member fails, the Standby member is promoted to Active and takes ownership of the cluster resources. As part of this process, the newly Active member does the following (an example of an equivalent API call appears after this list):

  • Associates the cluster's Secondary Private and Public IP addresses with its Primary vNIC
  • Associates any additional pair of Secondary Public/Private IPs attached to the Primary vNIC (one pair per published service)
  • Associates the Secondary Private IP with its Secondary vNIC
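
For illustration only, moving a Secondary Private IP to the new Active member's vNIC is equivalent to the following OCI CLI call (a sketch with placeholder values; the members perform this through the OCI REST API rather than the CLI):

  oci network vnic assign-private-ip \
      --vnic-id <ocid_of_new_active_member_vnic> \
      --ip-address 10.0.0.10 \
      --unassign-if-already-assigned

Here 10.0.0.10 stands for the cluster's Secondary Private IP, and --unassign-if-already-assigned moves the address away from the failed member's vNIC.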

Oracle API Authentication

In order to make API calls to Oracle automatically, the cluster members need permission to perform these API calls in the relevant compartment. This is achieved using OCI Identity and Access Management (IAM).

In this article, we will guide you through the following:

  • Creating a Dynamic Group with a matching rule that includes only the two cluster members
  • Creating a policy for the defined Dynamic Group

[4] Solution Topology

To best explain the configuration steps, we will use the following example environment. When you follow the configuration steps below, make sure to replace the IP addresses in the example to reflect your environment.

[5] CloudGuard Cluster Deployment

Follow the instructions below in order to deploy Check Point's CloudGuard Cluster solution in Oracle. Perform the steps from the Oracle portal using the preferred compartment. For more information, such as how to generate an SSH Key pair, instance creation, terms, and more, refer to the CloudGuard for Oracle Cloud Infrastructure Getting Started Guide.

1. Sign in to your OCI tenant account.

2. Select the relevant CloudGuard listing from the Oracle Cloud Marketplace or upload the CloudGuard image to one of your storage accounts.

Note: Skip step 3 if you are using a Marketplace listing.

3. Import the image into your custom images.

4. Create a VCN (for example, a VCN with CIDR Block 10.0.0.0/16).


5. Add two subnets to your VCN: one public subnet and one private subnet.

  • Frontend public subnet (10.0.0.0/24)
  • Backend private subnet (10.0.1.0/24)

6. Create a public subnet: for example, frontend (CIDR Block 10.0.0.0/24). 

7. Create a private subnet: for example,  backend (CIDR Block 10.0.1.0/24).

Final VCN configuration with two subnets: frontend and backend.

    8. Set the following Egress and Ingress Rules for your VCN.
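
    For reference, a permissive example of these rules is sketched below (an assumption, not a mandated configuration: the CloudGuard members inspect and filter the traffic themselves; tighten the rules to match your security requirements):

        Ingress Rules:  Source 0.0.0.0/0, IP Protocol: All Protocols
        Egress Rules:   Destination 0.0.0.0/0, IP Protocol: All Protocols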


    VCN Full Configuration:

    9. Create both CloudGuard cluster members. 



    Note: The "require an authorization header" option must be disabled in advanced options during instance creation

    10. Attach each instance's Primary vNIC (assigned at member creation) to your frontend subnet.

    11. Add an additional Secondary vNIC to each member and attach it to the private subnet (backend) you created earlier.


    Note: During instance creation, you can provide a user-defined script that is executed at first boot. You can use this script to complete the First Time Wizard (FTW) when using a non-Blink image, to complete the configuration when using a Blink image, or for any other purpose.
    - All R80.30 and higher Oracle Cloud images use Blink.
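
    For example, a minimal user-defined first-boot script that sets the admin password from a pre-generated hash might look like the sketch below (the hash value and the use of such a script are assumptions; a compatible MD5 hash can be generated in advance, for example with "openssl passwd -1"):

        #!/bin/bash
        # Sketch: set the Gaia admin password from a pre-generated hash
        # (placeholder value) and save the configuration.
        clish -s -c 'set user admin password-hash <password_hash>'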

    12. Choose one of the members (only one) and add a new Secondary Private IP to its Primary vNIC.

    13. Create a reserved Public IP and attach it to the Secondary Private IP you created in step 12. This serves as the cluster IP (the first VIP, used for VPN tunnels).

    14. Create one more Secondary Private IP and attach it to the Secondary vNIC of the member you chose in step 12 (the second VIP, used for outbound traffic).

    15. Add the following route table to the Private Subnet (the backend subnet; configure this after adding the Secondary vNIC). The route rule redirects traffic to the Secondary Private IP of the Secondary vNIC (traffic goes through the VIP).
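
    For example, assuming the Secondary Private IP created on the Secondary vNIC in step 14 is 10.0.1.10, the backend route table would contain a rule similar to:

        Destination CIDR Block: 0.0.0.0/0
        Target Type:            Private IP
        Target:                 10.0.1.10   (example value; use your backend VIP)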

    Note: Instances are created with only one vNIC, called the Primary vNIC. After instance creation, one more vNIC should be added to the instance; this is called the Secondary vNIC.

    Note: The Primary vNIC should be connected to the public subnet; the additional (Secondary) vNIC should be connected to the private subnet.

    Note: It is very important to edit both vNICs of each member and select the "Skip Source/Destination Check" checkbox.

    16. Add the following Route Table to the Public Subnet (Frontend).
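
    For example, the frontend route table typically sends default traffic to the VCN's Internet Gateway (this assumes an Internet Gateway has been created in the VCN):

        Destination CIDR Block: 0.0.0.0/0
        Target Type:            Internet Gateway
        Target:                 <your_internet_gateway>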

    17. Create a Dynamic Group and include both members in it (in this example, we name it cp_cluster_group). You can create the rules that define the Dynamic Group with the OCI Rule Builder: create two separate rules, one for each member. If you are not using the Rule Builder, you can manually define a single rule that includes both members, as shown below.
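
    For example, a single matching rule that includes both members by their instance OCIDs could look like this (the OCIDs are placeholders for your members' OCIDs):

        Any {instance.id = 'ocid1.instance.oc1..<member1_ocid>',
             instance.id = 'ocid1.instance.oc1..<member2_ocid>'}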

    18. Create a policy that allows the defined Dynamic Group to use resources in the compartment to which the members belong.
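
    For example, a broad policy statement of this kind is sketched below (the compartment name is a placeholder; you may prefer to grant narrower permissions that still cover vNIC and Private/Public IP operations):

        Allow dynamic-group cp_cluster_group to manage all-resources in compartment <your_compartment>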

    19. Connect to both CloudGuard members using the Private Key that matches the Public Key you used when creating the instances (ssh -i <private_key> admin@<cluster-member-public-ip>) and set the password by running the following commands:

    > set user admin password

    - enter your new password when prompted: <XXXXX>

    > save config

    > exit

    20. Connect to the members using a web browser with the member public IP and complete the FTW.

    https://<member_public_ip>

    User name : admin

    Password: XXXXX

    21. Configure the CloudGuard members and Cluster in the Management SmartConsole (see below).

    Note: In order to set an administrator password, you can connect to each member over SSH as described above, OR use a user-defined script supplied when you create the instance; the script runs at first boot.

[6] Configuring OCI Cluster in Check Point Security Management


    CloudGuard Gateway

    The CloudGuard Security Gateway can be managed in several ways, including the following:

    • A standalone configuration in which the Security Gateway acts as its own management.
    • Centrally managed, in which the management server is located on-premises outside the virtual network. 
    • Centrally managed, in which the management server is located in the same virtual network.

    CloudGuard Cluster Configuration

    1. Connect with Check Point SmartConsole to the Check Point Management Server.

    2. Create a new Check Point Cluster: in the Cluster menu, click on Cluster...

    3. Select Wizard Mode.

    4. Insert the cluster object's name (e.g., checkpoint-oci-cluster). In the Cluster IPv4 Address field, enter the public address (Secondary Public IP address of the Primary vNIC) allocated to the cluster and click on the Next button.

    Note: To see the Cluster IP address in the OCI portal, select the CloudGuard Active Member's Primary VNIC and then choose the Secondary Public IP (Secondary Public IP of the Primary vNIC; Primary vNIC is the first vNIC of the deployed instance).

    Example cluster configuration:

    5. Click on the Add button to add the cluster members.

    6. Configure cluster member properties:

    1. In the Name field, insert the first cluster member's name (e.g., member1).
    2. In the IPv4 Address field: If you are managing the cluster from the same VCN, then insert the member's Primary Private IP address of the Primary vNIC. Otherwise, insert the member's Primary public IP address of the Primary vNIC.
    3. In the Activation Key field, insert the SIC (Secure Internal Communication) key you defined for the CloudGuard member during FTW configuration.
    4. In the Confirm Activation Key field, re-enter the key and click on Initialize. The Trust State field should show: "Trust established."
    5. Click OK.

    Example:

    7. Repeat steps 5-6 to add the second CloudGuard cluster member. Click on the Next button.

    Example:

    8. The cluster wizard summary window opens:

    9. Click on the Finish button.

    10. Review the cluster configuration and configure the cluster interfaces:

    1. Click on cluster object checkpoint-oci-cluster.
    2. Click on Network Management.
    3. Double-click on eth0.
    4. Click on General.
    5. Choose Network Type "Cluster" and insert the Secondary Private IP address of the Primary vNIC (this is the first VIP).
    6. Click OK.
    7. In Network Management, double-click on eth1.
    8. Click on General.
    9. Choose Network Type "Cluster + Sync" and insert the Secondary Private IP address of the Secondary vNIC (this is the second VIP).
    10. Click OK and exit the cluster object configurations dialog.

    Example:

     

    11. To provide Internet connectivity to the internal subnet and to publish services, use NAT rules.
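
    For example, a Hide NAT rule for outbound traffic from the backend subnet could look like the sketch below ("backend-net" is an assumed network object representing 10.0.1.0/24):

        Original Source:      backend-net    Translated Source:      checkpoint-oci-cluster (Hide)
        Original Destination: Any            Translated Destination: = Original
        Original Services:    Any            Translated Services:    = Original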

    12. Configure and install the Security policy on the cluster.

[7] Known Limitations

    • NTP must be configured on the cluster members for failover to work properly (this is an Oracle API requirement; see the clish example after this list).
    • CloudGuard Controller is not yet supported for OCI.
    • To inspect East/West traffic, each backend subnet that requires inspection must reside in its own VCN and be routed to the backend vNIC via Local Peering Gateways (LPGs).
    • You can assign up to 32 pairs of Private and Public IPs for published services.
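
    For example, NTP can be enabled on each member in clish (169.254.169.254 is the Oracle-provided NTP address; this choice is an assumption, so use the NTP servers appropriate for your environment):

        > set ntp active on
        > set ntp server primary 169.254.169.254 version 4
        > save config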


[9] Adding Additional Secondary IPs to OCI Cluster

    If Secondary IPs other than the cluster IP need to be attached to the Active member, do the following:

    1. Attach all desired Secondary IPs to the Active member in the OCI console.

    2. Push policy to the gateways.
