Overview
Oracle Cloud Infrastructure’s security capabilities let you run your mission-critical workloads and keep your data in the cloud with complete control and confidence. Oracle Cloud Infrastructure (OCI) offers best-in-class security technology and operational processes to secure its enterprise cloud services, based on a shared responsibility model. Oracle is responsible for the security of the cloud infrastructure and operations, and you are responsible for securely configuring your workloads to meet your compliance responsibilities.
Check Point is a world-class provider of cyber security solutions to governments and enterprises globally. Check Point CloudGuard Network Security (CGNS) for OCI provides advanced, multilayered security to protect applications from attacks while enabling secure connectivity from enterprise and hybrid cloud networks. It provides consistent security policy management, enforcement, and reporting, allowing customers to move or extend their workloads to OCI painlessly.
Organizations moving or extending their Oracle applications, such as E-Business Suite or PeopleSoft, to OCI can choose Check Point CloudGuard Network Security to inspect traffic and enforce security controls and policies.
This reference architecture provides best practices and recommendations to correctly design and segment Oracle applications that an organization plans to migrate or extend into OCI and apply appropriate security controls.
The security controls include these features:
- Access Controls (Firewall)
- Logging
- Application Control
- Intrusion Prevention (IPS)
- Advanced Threat Prevention (Anti-Virus/Anti-Bot/SandBlast)
- Site-to-Site VPN for communication with the on-premises network
- Remote access VPN for communication with roaming users
- Network address translation for internet bound traffic
Architecture
This reference architecture illustrates how organizations can protect Oracle applications, such as Oracle E-Business Suite, PeopleSoft, and other applications deployed in OCI, by using Check Point CloudGuard Network Security with a flexible network load balancer.
To protect these applications and their traffic flows, Check Point recommends segmenting the network using a north and south hub-and-spoke design:
- The north hub protects publicly accessible resources from malicious inbound traffic. The north hub uses the Oracle flexible network load balancer that allows organizations to create a scalable set of CloudGuard Network Security gateways that can be sized appropriately based on throughput requirements.
- The south hub protects the traffic between spokes, traffic egressing to the internet, traffic to the Oracle Services Network, and traffic to or from on-premises networks. We recommend that the south hub contains a highly available cluster of CloudGuard Network Security gateways, so that stateful failover can occur for traffic that is sensitive to interruption.
- Deploy each tier of your application in its own virtual cloud network (VCN), which acts as a spoke. This separation allows for granular control of the traffic between spokes.
- The north hub VCN connects incoming traffic from the internet to the different spoke VCNs through a flexible network load balancer and a dynamic routing gateway (DRG).
- The south hub VCN connects to the spoke VCNs through the DRG. All outgoing traffic and traffic between spokes use route table rules to route traffic through the DRG to the south hub for inspection by the CloudGuard Network Security cluster.
- Use one of these methods to manage the environment:
- Centrally manage the environment with a Check Point Security Management Server or Multi-Domain Management Server, deployed either in its own subnet in the north hub VCN or as a pre-existing customer deployment that's accessible to the security gateways.
- Centrally manage the environment from Check Point Smart-1 Cloud management-as-a-service.
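The hub-and-spoke segmentation described above can also be provisioned programmatically. The following minimal sketch uses the OCI Python SDK to create the hub and spoke VCNs and attach them to a single DRG; the compartment OCID, CIDR blocks, and display names are illustrative assumptions, not values mandated by this architecture.

```python
# Minimal sketch (OCI Python SDK): create hub and spoke VCNs and attach them to one DRG.
# The compartment OCID and CIDR blocks below are illustrative placeholders.
import oci

config = oci.config.from_file()                      # reads ~/.oci/config
vnet = oci.core.VirtualNetworkClient(config)
compartment_id = "ocid1.compartment.oc1..example"    # placeholder

def create_vcn(name, cidr):
    details = oci.core.models.CreateVcnDetails(
        compartment_id=compartment_id, display_name=name, cidr_block=cidr)
    return vnet.create_vcn(details).data

north_hub = create_vcn("north-hub-vcn", "10.1.0.0/16")
south_hub = create_vcn("south-hub-vcn", "192.168.0.0/16")
web_spoke = create_vcn("web-app-spoke-vcn", "10.0.0.0/24")
db_spoke  = create_vcn("db-spoke-vcn", "10.0.1.0/24")

# One DRG interconnects the hubs, the spokes, and on-premises networks.
drg = vnet.create_drg(oci.core.models.CreateDrgDetails(
    compartment_id=compartment_id, display_name="hub-spoke-drg")).data

for vcn in (north_hub, south_hub, web_spoke, db_spoke):
    vnet.create_drg_attachment(oci.core.models.CreateDrgAttachmentDetails(
        drg_id=drg.id, vcn_id=vcn.id,
        display_name=f"{vcn.display_name}-attachment"))
```

In practice this topology is usually built with Terraform or the Check Point-provided templates; the sketch only illustrates the relationships between the VCNs and the DRG.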
The following diagram illustrates this reference architecture:
This image shows an OCI region that includes two availability domains. The region contains four virtual cloud networks (VCNs) in a hub-and-spoke topology connected by a dynamic routing gateway (DRG). The VCNs are arranged as functional layers.
- North hub VCN: The north hub VCN contains a scalable set of two Check Point CloudGuard Network Security gateway virtual machines (VMs), with one VM in each availability domain. The north hub VCN also includes a Check Point Security Management Server platform to manage the Check Point CloudGuard Network Security gateways. The north hub VCN includes three subnets: a frontend subnet, a backend subnet, and a network load balancer subnet.
- The frontend subnet uses a primary interface (vNIC1) for inbound internet traffic to or from the Check Point CloudGuard Network Security gateways.
- The backend subnet uses a second interface (vNIC2) for internal traffic to or from the Check Point CloudGuard Network Security gateways.
- The network load balancer subnet lets you create a private or public flexible network load balancer, which allows on-premises and inbound connections from the internet.
The north hub VCN includes the following communication gateways:
- Internet gateway: Connects the internet and external web clients to the Check Point CloudGuard Network Security gateways in availability domain 1 through the frontend subnet.
- Dynamic routing gateway: Connects the customer data center and customer premises equipment, over IPSec VPN or FastConnect, to the Check Point CloudGuard Network Security gateways in availability domain 1 through the frontend subnet. In addition, the DRG supports communication between VCNs. Each VCN has an attachment to the DRG.
- The north hub VCN includes an external (public) flexible network load balancer. The backend set of this load balancer consists of the frontend interfaces of the Check Point CloudGuard Network Security gateways. Internet traffic reaches this load balancer through an internet gateway.
- South hub VCN: The south hub VCN contains a high availability cluster of two Check Point CloudGuard Network Security VMs, with one VM in each availability domain. The firewall VMs in the south hub VCN are managed by the Check Point Security Management Server deployed in the north hub VCN. The south hub VCN includes a frontend subnet and a backend subnet.
- The frontend subnet uses a primary interface (vNIC1) for inbound/internet traffic to or from the Check Point CloudGuard Network Security gateways.
- The backend subnet uses a second interface (vNIC2) for internal traffic to or from the Check Point CloudGuard Network Security gateways.
The south hub VCN includes the following communication gateways:
- Internet gateway: Connects the internet and external web clients to the Check Point CloudGuard Network Security gateways in availability domain 1 through the frontend subnet.
- Dynamic routing gateway: The south hub VCN is attached to the DRG to receive inbound traffic from the spoke VCNs, which is inspected by the Check Point CloudGuard Network Security gateways.
For each traffic flow scenario, make sure that network address translation (NAT) and security policies are configured on the CloudGuard Network Security gateways. The currently supported flexible network load balancer use case requires that you enable source NAT on the firewall from which traffic exits.
North-South inbound traffic flow through the north hub VCN
This diagram illustrates how north-south inbound traffic accesses the web application tier from the internet:
This image shows the north-south inbound traffic flow between the north hub VCN and the web or application (spoke) VCN in a region that uses Check Point CloudGuard Network Security gateways. The OCI region includes two availability domains. The region contains a north hub VCN and a single spoke VCN (web or application tier) connected by a dynamic routing gateway (DRG).
- North hub VCN (10.1.0.0/16): The north hub VCN contains a scalable set of two Check Point CloudGuard Network Security gateway virtual machines (VMs), with one VM in each availability domain. The north hub VCN also includes a Check Point Security Management Server platform to manage the Check Point CloudGuard Network Security gateways. The north hub VCN includes three subnets: a frontend subnet, a backend subnet, and a network load balancer subnet.
- The frontend subnet uses a primary interface (vNIC1) for inbound internet traffic to or from the Check Point CloudGuard Network Security gateways.
- The backend subnet uses a second interface (vNIC2) for internal traffic to or from the Check Point CloudGuard Network Security gateways.
- The network load balancer subnet lets you create a private or public flexible network load balancer, which allows on-premises and inbound connections from the internet.
Inbound traffic enters the hub VCN from external sources through the external network load balancer public IP to the Check Point CloudGuard Network Security gateways:
- Internet gateway: Traffic from the internet and external web clients routes to the external public network load balancer, and then to one of the Check Point CloudGuard Network Security gateways. The network load balancer has a public IP address, which allows connections from outside. The default route rule uses destination CIDR 0.0.0.0/0 (all addresses) and points to the first host IP address in the outside (frontend) subnet CIDR.
- One of the Check Point CloudGuard Network Security gateways inspects the traffic. Configure source NAT so that traffic exiting the firewall uses the IP address of the firewall's backend interface. The destination is the spoke VCN VMs and load balancer where you want to send the traffic.
- Based on the backend route table, traffic goes to the DRG because the spoke VCN has a DRG attachment.
- DRG: Traffic from the inside subnet to the spoke VCN is routed over the DRG.
- Application or web: If traffic is destined for this spoke VCN, it's routed through the DRG's application or web VCN attachment.
- Database: If traffic is destined for this spoke VCN, it's routed through the DRG's database VCN attachment.
- Web or application tier spoke VCN (10.0.0.0/24): The VCN contains one subnet. An application load balancer manages traffic between the web and application VMs in each availability domain. Traffic from the north hub VCN to the application load balancer is routed over the DRG. The spoke subnet route table routes the default destination CIDR 0.0.0.0/0 (all addresses) through the DRG.
- Outgoing connections go from the web application and database tiers to the internet for software updates and access to external web services. Ensure that hide NAT is configured in your Check Point security policy for the relevant networks.
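As a concrete illustration of the spoke routing used in this flow, the following sketch (OCI Python SDK, placeholder OCIDs) creates a route table for the web or application spoke subnet that sends all traffic (0.0.0.0/0) to the DRG so that it can be inspected in the hub.

```python
# Sketch: spoke subnet route table that forwards all traffic to the DRG for inspection.
# The OCIDs are placeholders; in practice they come from the resources created earlier.
import oci

config = oci.config.from_file()
vnet = oci.core.VirtualNetworkClient(config)

compartment_id = "ocid1.compartment.oc1..example"   # placeholder
spoke_vcn_id = "ocid1.vcn.oc1..spoke-example"       # placeholder
drg_id = "ocid1.drg.oc1..example"                   # placeholder

spoke_rt = vnet.create_route_table(oci.core.models.CreateRouteTableDetails(
    compartment_id=compartment_id,
    vcn_id=spoke_vcn_id,
    display_name="web-app-spoke-rt",
    route_rules=[oci.core.models.RouteRule(
        destination="0.0.0.0/0",
        destination_type="CIDR_BLOCK",
        network_entity_id=drg_id,   # next hop: the DRG that this VCN is attached to
        description="Send all spoke traffic to the DRG for firewall inspection")],
)).data
# Associate spoke_rt with the spoke subnet when you create or update that subnet.
```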
North-South outbound traffic flow through the south hub VCN
This diagram illustrates the outgoing connections from the web application and database tiers to the internet for software updates and access to external web services:

This image shows the north-south outbound traffic flow from the web or application (spoke) VCN through the hub VCN in a region that uses a Check Point CloudGuard Network Security gateways cluster.
The OCI region includes two availability domains. The region contains a south hub VCN and one spoke VCN (web or application tier) connected by dynamic routing gateway (DRG) attachments.
- Spoke (web or application) VCN (10.0.0.0/24): The VCN contains one subnet. An application load balancer manages traffic between the web or application VMs in each availability domain. Outbound traffic from the application load balancer to the south hub VCN is routed over the DRG. The spoke subnet destination CIDR is 0.0.0.0/0 (all addresses) through the DRG.
- South hub VCN (192.168.0.0/16): The south hub VCN contains a high availability cluster of two Check Point CloudGuard Network Security gateway virtual machines (VMs) with one VM in each availability domain.
- The south hub VCN includes a frontend subnet and a backend subnet:
- The frontend subnet uses the primary interface (vNIC1) to allow end users to connect to the user interface and support outbound traffic through this subnet.
- The backend subnet uses the secondary interface vNIC2 for internal traffic to or from the Check Point CloudGuard Network Security gateways.
Outbound traffic from the spoke (web or application) VCN flows as follows: the traffic enters the south hub VCN through the DRG, which sends it to the active cluster member.
- Check Point CloudGuard Network Security gateways: Traffic from the DRG is routed through the secondary virtual IP (VIP) of vNIC2 to the active gateway's backend interface in the backend subnet. After inspection, the traffic exits through the frontend subnet toward external targets.
- Internet gateway: Traffic to the internet and external web clients is routed through an internet gateway. The frontend subnet route table's destination CIDR for the internet gateway is 0.0.0.0/0 (all addresses).
- Dynamic routing gateway: Traffic to the customer data center and between VCNs is routed through the DRG. The frontend subnet route table's destination CIDR for the DRG is 172.16.0.0/12. Each VCN has an attachment to the DRG.
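The frontend route table described in the last two bullets can be expressed the same way as the spoke route table shown earlier. The following sketch (OCI Python SDK, placeholder OCIDs; 172.16.0.0/12 is the example on-premises range used in this flow) adds the default route to the internet gateway and the on-premises route to the DRG.

```python
# Sketch: frontend subnet route table in the south hub VCN.
# 0.0.0.0/0 goes to the internet gateway; on-premises ranges go to the DRG.
import oci

config = oci.config.from_file()
vnet = oci.core.VirtualNetworkClient(config)

frontend_rt_id = "ocid1.routetable.oc1..frontend-example"   # placeholder
igw_id = "ocid1.internetgateway.oc1..example"               # placeholder
drg_id = "ocid1.drg.oc1..example"                           # placeholder

# Note: update_route_table replaces the whole rule list, so include any existing rules too.
vnet.update_route_table(frontend_rt_id, oci.core.models.UpdateRouteTableDetails(
    route_rules=[
        oci.core.models.RouteRule(
            destination="0.0.0.0/0", destination_type="CIDR_BLOCK",
            network_entity_id=igw_id,
            description="Outbound internet traffic after firewall inspection"),
        oci.core.models.RouteRule(
            destination="172.16.0.0/12", destination_type="CIDR_BLOCK",
            network_entity_id=drg_id,
            description="Traffic to the on-premises data center over the DRG"),
    ]))
```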
East-West traffic flow (web to database) through the south hub VCN
This diagram illustrates how traffic moves from the web application to the database tier:

This image shows the east-west traffic flow from the web or application to the database in a regional hub and spoke topology that uses a Check Point CloudGuard firewall. It includes three virtual cloud networks (VCNs):
- South hub VCN (192.168.0.0/16): The south hub VCN houses the Check Point CloudGuard Network Security gateways. The backend subnet uses vNIC2 for internal traffic to or from the Check Point CloudGuard Network Security gateways. The south hub VCN communicates with spoke VCNs through a dynamic routing gateway (DRG).
- Web or application tier spoke VCN (10.0.0.0/24): The VCN contains a single subnet. A load balancer manages traffic to the web or application VMs. The application tier VCN is connected to the hub VCN over the DRG.
- Database tier spoke VCN (10.0.1.0/24): The VCN contains one subnet that contains the primary database system. The database tier VCN is connected to the hub VCN over the DRG.
East-west traffic flows from the web or application to the database in the following steps:
- Traffic that moves from the web or application tier to the database tier (10.0.1.10) is routed through the web or application subnet route table (destination 0.0.0.0/0).
- Traffic moves from the web or application subnet route table to the DRG for the database tier spoke VCN.
- Traffic moves from the DRG through the south hub VCN ingress route table to the Check Point CloudGuard Network Security gateway VMs using the secondary IP of vNIC2.
- Traffic from the active Check Point CloudGuard Network Security gateway is routed through the backend subnet route table (destination: 10.0.1.0/24).
- Traffic moves from the backend subnet route table to the DRG for the database spoke VCN.
- Traffic moves from the DRG for the database system through database spoke VCN attachment.
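The hop through the gateway's floating IP in step 3 relies on a VCN ingress route table associated with the south hub's DRG attachment. The sketch below (OCI Python SDK; the OCIDs and the floating IP address are placeholder assumptions) looks up the OCID of the secondary private IP on vNIC2, uses it as the route target, and then associates the route table with the DRG attachment.

```python
# Sketch: south hub VCN ingress route table that steers traffic arriving from the DRG
# to the secondary (floating) private IP on the active gateway's backend interface.
import oci

config = oci.config.from_file()
vnet = oci.core.VirtualNetworkClient(config)

compartment_id = "ocid1.compartment.oc1..example"            # placeholder
south_hub_vcn_id = "ocid1.vcn.oc1..south-hub-example"        # placeholder
backend_subnet_id = "ocid1.subnet.oc1..backend-example"      # placeholder
hub_attachment_id = "ocid1.drgattachment.oc1..hub-example"   # placeholder
floating_ip_address = "192.168.2.10"                         # assumed floating IP on vNIC2

# Resolve the OCID of the floating private IP object in the backend subnet.
floating_ip = vnet.list_private_ips(
    subnet_id=backend_subnet_id, ip_address=floating_ip_address).data[0]

ingress_rt = vnet.create_route_table(oci.core.models.CreateRouteTableDetails(
    compartment_id=compartment_id, vcn_id=south_hub_vcn_id,
    display_name="south-hub-ingress-rt",
    route_rules=[oci.core.models.RouteRule(
        destination="0.0.0.0/0", destination_type="CIDR_BLOCK",
        network_entity_id=floating_ip.id,   # route target is a private IP OCID
        description="Send spoke-to-spoke traffic to the active firewall")])).data

# Attach the ingress route table to the south hub VCN's DRG attachment.
vnet.update_drg_attachment(hub_attachment_id,
    oci.core.models.UpdateDrgAttachmentDetails(route_table_id=ingress_rt.id))
```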
East-West traffic flow (database to web) through the south hub VCN
This diagram illustrates how traffic moves from the database tier to the web application:
This image shows the east-west traffic flow from the database to the web or application in a regional hub and spoke topology that uses a Check Point CloudGuard Network Security gateway. It includes three virtual cloud networks (VCNs):
- South hub VCN (192.168.0.0/16): This VCN houses the Check Point CloudGuard Network Security gateways. The backend subnet uses vNIC2 for internal traffic to or from the Check Point CloudGuard Network Security gateway. The south hub VCN communicates with spoke VCNs through a dynamic routing gateway (DRG).
- Web or application tier spoke VCN (10.0.0.0/24): The VCN contains one subnet. A load balancer manages traffic to the web or application VMs. The application tier VCN is connected to the hub VCN over the DRG.
- Database tier spoke VCN (10.0.1.0/24): The VCN contains one subnet that contains the primary database system. The database tier VCN is connected to the hub VCN over the DRG.
East-west traffic flows from the database to the web or application in the following steps:
- Traffic that moves from the database tier to the web or application load balancer (10.0.0.10) is routed through the database subnet route table (destination 0.0.0.0/0).
- Traffic moves from the database subnet route table to the DRG for the database tier spoke VCN.
- Traffic moves from the DRG through the south hub VCN ingress route table to Check Point CloudGuard Network Security gateway VMs using the secondary IP of vNIC2.
- Traffic from the active Check Point CloudGuard Network Security gateway is routed through the backend subnet route table (destination 10.0.0.0/16).
- Traffic moves from the backend subnet route table to the DRG for the web spoke VCN.
- Traffic moves from the DRG for the web or application load balancer through the web spoke VCN attachment.
East-West traffic flow (web application to Oracle Services Network) through the south hub VCN
This diagram illustrates how traffic moves from the web application to the Oracle Services Network:

This image shows the east-west traffic flow from the web or application tier to OCI Object Storage and other Oracle Services Network services in a regional hub and spoke topology that uses a Check Point CloudGuard Network Security gateway. It includes two virtual cloud networks (VCNs):
- South hub VCN (192.168.0.0/16): The south hub VCN houses the Check Point CloudGuard Network Security high availability cluster. The backend subnet uses the vNIC2 interface for internal traffic to or from the CloudGuard Network Security gateways. The south hub VCN communicates with spoke VCNs through a dynamic routing gateway (DRG) and with OCI Object Storage through a service gateway.
- Web or application tier spoke VCN (10.0.0.0/24): The VCN contains one subnet. An application load balancer manages traffic to the web or application VMs. The application tier VCN is connected to the south hub VCN through the DRG.
East-west traffic flows from the web or application to OCI Object Storage in the following steps:
- Traffic that moves from the web or application tier to Object Storage is routed through the web or application subnet route table (destination 0.0.0.0/0).
- Traffic moves from the web or application subnet route table to the DRG for the Object Storage traffic.
- Traffic moves from the DRG through the south hub VCN ingress route table to the Check Point CloudGuard Network Security gateway in the backend subnet over vNIC2 through the secondary floating IP of vNIC2.
- Traffic from the Check Point CloudGuard Network Security gateways is routed through the backend subnet route table (destination: Oracle Services Network).
- Traffic moves from the backend subnet route table to the service gateway.
- Traffic moves from the service gateway to Oracle Services Network, such as OCI Object Storage.
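The service gateway path in steps 4 through 6 can be modeled as follows. The sketch (OCI Python SDK; the OCIDs and the service lookup by label are assumptions) creates a service gateway for the Oracle Services Network and adds a backend subnet route rule whose destination is the regional services CIDR label.

```python
# Sketch: service gateway for the Oracle Services Network and a backend route rule to it.
import oci

config = oci.config.from_file()
vnet = oci.core.VirtualNetworkClient(config)

compartment_id = "ocid1.compartment.oc1..example"          # placeholder
south_hub_vcn_id = "ocid1.vcn.oc1..south-hub-example"      # placeholder
backend_rt_id = "ocid1.routetable.oc1..backend-example"    # placeholder

# Pick the "all services in Oracle Services Network" entry for the region.
all_services = next(s for s in vnet.list_services().data
                    if "all" in s.cidr_block.lower())

sgw = vnet.create_service_gateway(oci.core.models.CreateServiceGatewayDetails(
    compartment_id=compartment_id, vcn_id=south_hub_vcn_id,
    display_name="osn-service-gateway",
    services=[oci.core.models.ServiceIdRequestDetails(service_id=all_services.id)])).data

# Backend subnet route rule: Oracle Services Network traffic exits through the service gateway.
# update_route_table replaces the rule list, so include your existing rules as well.
vnet.update_route_table(backend_rt_id, oci.core.models.UpdateRouteTableDetails(
    route_rules=[oci.core.models.RouteRule(
        destination=all_services.cidr_block,
        destination_type="SERVICE_CIDR_BLOCK",
        network_entity_id=sgw.id,
        description="Object Storage and other Oracle services via the service gateway")]))
```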
East-West traffic flow (Oracle Services Network to web application) through the south hub VCN
This diagram illustrates how traffic moves from the Oracle Services Network to the web application:

This image shows the east-west traffic flow from OCI Object Storage and other Oracle Services Network services to the web application in a regional hub and spoke topology that uses a Check Point CloudGuard Network Security gateway. It includes two virtual cloud networks (VCNs):
- South hub VCN (192.168.0.0/16): The south hub VCN houses the Check Point CloudGuard Network Security high availability cluster. The backend subnet uses the vNIC2 interface for internal traffic to or from the CloudGuard Network Security gateways. The south hub VCN communicates with spoke VCNs through a dynamic routing gateway (DRG) and with OCI Object Storage through a service gateway.
- Web or application tier spoke VCN (10.0.0.0/24): The VCN contains a single subnet. An application load balancer manages traffic to the web or application VMs. The application tier VCN is connected to the south hub VCN through a dynamic routing gateway (DRG).
East-west traffic flows from OCI Object Storage to the web or application in these steps:
- Traffic that moves from Object Storage to the web or application VM (10.0.0.10) is routed through the service gateway route table (destination 0.0.0.0/0) in the south hub VCN.
- Traffic moves from the service gateway to the Check Point CloudGuard Network Security gateways in the backend subnet over vNIC2 through the secondary floating IP of vNIC2.
- Traffic from Check Point CloudGuard Network Security gateways is routed through the backend subnet route table (destination 10.0.0.0/24).
- Traffic moves from the backend subnet route table to the DRG.
- Traffic moves from the DRG through the web or application tier spoke VCN attachment.
- Traffic moves from the web VCN attachment to the load balancer for the web or application tier.
This architecture has these components:
- Check Point CloudGuard Network Security gateways
- Provides advanced threat prevention and cloud network security for hybrid clouds.
- Check Point Security Management
- Security Management Server
- Multi-Domain Management
- Smart-1 Cloud Management-as-a-Service
- Oracle E-Business Suite or PeopleSoft application tier
- Oracle E-Business Suite or PeopleSoft application servers and file system
- Oracle E-Business Suite or PeopleSoft database tier
- Composed of Oracle Database, including but not limited to Oracle Exadata Database Service or other Oracle Database services.
- Region
- A region is a localized geographic area composed of one or more availability domains. Regions are independent of other regions, and vast distances can separate them (across countries or continents).
- Availability domains
- Availability domains are standalone, independent data centers in a region. The physical resources in each availability domain are isolated from the resources in the other availability domains, which provides fault tolerance. Availability domains do not share infrastructure, such as power or cooling, or the internal availability domain network. Therefore, a failure at one availability domain is unlikely to affect the other availability domains in the region.
- Fault domains
- A fault domain is a grouping of hardware and infrastructure in an availability domain. Each availability domain has three fault domains with independent power and hardware. When you place Compute instances across multiple fault domains, applications can tolerate physical server failure, system maintenance, and many common networking and power failures in the availability domain.
- Virtual cloud network (VCN) and subnets
- A VCN is a customizable, private network that you set up in an OCI region. Like traditional data center networks, VCNs give you complete control over your network environment. You can segment VCNs into subnets, which can be scoped to a region or an availability domain. Both regional subnets and availability domain-specific subnets can coexist in the same VCN. A subnet can be public or private.
- Hub VCN
- A centralized network where the Check Point CloudGuard Network Security gateways are deployed. It provides secure connectivity to all spoke VCNs, OCI services, public endpoints and clients, and on-premises data center networks.
- Web application tier spoke VCN
- The web application tier spoke VCN contains a private subnet to host Oracle E-Business Suite or PeopleSoft components.
- Database tier spoke VCN
- The database tier spoke VCN contains a private subnet for hosting Oracle databases.
- Load balancer
- The Oracle Cloud Infrastructure Load Balancing service provides automated traffic distribution from a single entry point to multiple servers in the backend.
- Security list
- For each subnet, you can create security rules that specify the source, destination, and traffic type that must be allowed in and out of the subnet.
- Route tables
- Virtual route tables contain rules to route traffic from subnets to destinations outside a VCN, typically through gateways. In the north hub VCN, you have the following route tables:
- The network load balancer route table, attached to the network load balancer subnet, points the CIDR block of the on-premises network to the DRG and has a default route to the internet gateway.
- A frontend route table attached to the frontend subnet, which has a default route to the internet gateway for routing traffic from the north hub VCN to internet or on-premises targets.
- A backend route table attached to the backend subnet, which points the CIDR blocks of the spoke VCNs to the DRG.
- In the south hub VCN, you have these route tables:
- A frontend route table attached to the frontend subnet, which has a default route to the internet gateway for routing traffic from the south hub VCN to internet or on-premises targets.
- A backend route table attached to the backend subnet, which points the CIDR blocks of the spoke VCNs to the DRG.
- A south hub VCN ingress route table attached to the south hub VCN's DRG attachment, which sends incoming traffic from the spoke VCNs to the secondary (floating) IP address on the backend interface of the active CloudGuard Network Security gateway.
- A distinct route table defined and attached to the associated subnet of each spoke VCN attached to the hub through the DRG. That route table forwards all traffic (0.0.0.0/0) from the spoke VCN to the DRG, where the hub ingress route table steers it to the secondary floating IP address on the backend interface of the active CloudGuard Network Security gateway. You can also define more granular route rules.
- A service gateway route table attached to the service gateway for Oracle Services Network communication. That route table forwards all traffic (0.0.0.0/0) to the secondary floating IP address on the backend interface of the active CloudGuard Network Security gateway.
- To maintain traffic symmetry, routes are also added on each Check Point CloudGuard Network Security gateway: the CIDR blocks of the spoke networks point to the backend (internal) subnet's default gateway IP (available in the backend subnet of the south hub VCN), and the default CIDR block (0.0.0.0/0) points to the frontend subnet's default gateway IP.
- On the DRG, you have these route tables:
- For each spoke VCN attachment, an associated DRG route table ensures that traffic goes to the south hub VCN. Add a route rule to ensure that traffic destined for the backend subnet of the north hub VCN follows the same path on which it arrived.
- For the south hub VCN attachment, an associated DRG route table ensures that the routes imported from each VCN attached to the DRG are part of this route table.
- Internet gateway
- The internet gateway allows traffic between the public subnets in a VCN and the public internet.
- NAT gateway
- The NAT gateway enables private resources in a VCN to access hosts on the Internet without exposing those resources to incoming Internet connections.
- Local peering gateways (LPG)
- An LPG enables you to peer one VCN with another VCN in the same region. Peering means the VCNs communicate using private IP addresses, without the traffic traversing the Internet or routing through your on-premises network.
- Dynamic routing gateway (DRG)
- The DRG is a virtual router that provides a path for private network traffic between a VCN and a network outside the region, such as a VCN in a different Oracle Cloud Infrastructure region, an on-premises network, or a network in another cloud provider.
- Service gateway
- A service gateway is required to communicate with Oracle services, such as IaaS, PaaS, and SaaS services, from the hub VCN or the on-premises network.
- FastConnect
- Oracle Cloud Infrastructure FastConnect provides an easy way to create a dedicated, private connection between your data center and OCI. FastConnect provides higher-bandwidth options and a more reliable networking experience when compared with internet-based connections.
- Virtual network interface card (VNIC)
- The servers in Oracle Cloud Infrastructure data centers have physical network interface cards (NICs). Virtual machine (VM) instances communicate using virtual NICs (VNICs) associated with the physical NICs. Each instance has a primary VNIC that is automatically created and attached during launch and is available during the instance's lifetime. Dynamic Host Configuration Protocol (DHCP) addressing is offered to the primary VNIC only. You can add secondary VNICs after an instance is launched; set static IP addresses for those interfaces.
- Private IPs
- A private IPv4 address and related information for addressing an instance. Each VNIC has a primary private IP, and you can add and remove secondary private IPs. The primary private IP address on an instance is attached during instance launch and does not change during the instance's lifetime. Secondary IP addresses also belong to the same CIDR block as the VNIC's subnet. A secondary IP can be used as a floating IP because it can move between different VNICs on different instances in the same subnet. You can also use it as a separate endpoint to host different services.
- Public IPs
- The Networking service defines a public IPv4 address, chosen by Oracle, that is mapped to a private IP address.
- Ephemeral: This address is temporary and exists for the instance's lifetime.
- Reserved: This address persists beyond the lifetime of the instance. It can be unassigned and reassigned to a different instance.
- Source and destination check
- Each VNIC performs a source and destination check on its network traffic. Disable this check on the CloudGuard gateway VNICs so that the gateways can inspect and forward traffic between the hub and spokes (see the sketch following this list).
- Compute shape
- The shape of a Compute instance specifies the number of CPUs and amount of memory allocated to the instance. The Compute shape also determines the number of VNICs and maximum bandwidth available for the compute instance.
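As referenced in the source and destination check entry above, the check is controlled per VNIC. A minimal sketch (OCI Python SDK, placeholder OCID) that disables it on a gateway VNIC:

```python
# Sketch: disable the source/destination check on a CloudGuard gateway VNIC so that it
# can forward traffic that is not addressed to the VNIC itself.
import oci

config = oci.config.from_file()
vnet = oci.core.VirtualNetworkClient(config)

gateway_vnic_id = "ocid1.vnic.oc1..example"   # placeholder: a gateway frontend or backend VNIC

vnet.update_vnic(gateway_vnic_id,
                 oci.core.models.UpdateVnicDetails(skip_source_dest_check=True))
```

Repeat this for every VNIC (frontend and backend) on each CloudGuard gateway.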
Recommendations
Use these recommendations as a starting point to secure Oracle E-Business Suite, PeopleSoft, or other application workloads on OCI with the Check Point CloudGuard Network Security gateway. Your requirements might differ from the architecture described here.
- VCN
- When you create the VCN, determine how many IP addresses your cloud resources in each subnet require. Using Classless Inter-Domain Routing (CIDR) notation, specify a subnet mask and a network address range large enough for the required IP addresses. Use an address space that is within the standard private IP address blocks.
- Select an address range that does not overlap with your on-premises network, so that you can set up a connection between the VCN and your on-premises network later, if necessary.
- When you design the subnets, consider functionality and security requirements. Place all compute instances in the same tier or role in the same subnet.
- Use a regional subnet.
- Verify the maximum number of LPGs per VCN in your service limits when you want to extend this architecture for multiple environments and applications.
- Check Point CloudGuard Network Security
- Deploy a high availability cluster. Follow the CloudGuard Network for Oracle Cloud Getting Started Guide for best practices.
- When possible, deploy the gateways in different availability domains, or at a minimum in distinct fault domains.
- Make sure that MTU is set to 9000 on all VNICs.
- Use SR-IOV and VFIO interfaces.
- Create a second hub-spoke topology for disaster recovery or geo-redundancy in a different region.
- Do not restrict traffic through security lists or network security groups (NSGs), because the security gateway secures all traffic.
- By default, ports 443 and 22 are open on the gateway, and more ports are open based on security policies.
- Check Point Security Management
- If you are doing a new deployment hosted in OCI, create a dedicated subnet for management.
- Deploy a secondary management server (management for high availability) in a different availability domain or region.
- Use security lists or NSGs to restrict inbound access from the internet to ports 443, 22, and 19009, which are used to administer the security policy and to view logs and events (see the NSG sketch after these recommendations).
- Create either a security list or an NSG that allows ingress and egress traffic between the security gateways and the Security Management Server.
- Check Point Security Policies
- Refer to the documentation for your Oracle application for the most up-to-date information on the ports and protocols that must be accessible.
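The management-access recommendation above can be implemented with a network security group. The following sketch (OCI Python SDK; the OCIDs and the permitted source CIDR are placeholder assumptions that you should narrow to your administrative networks) creates an NSG and allows TCP 443, 22, and 19009 ingress.

```python
# Sketch: NSG that restricts inbound management access to TCP 443, 22, and 19009.
# Narrow the source CIDR to your administrative networks; 0.0.0.0/0 is only a placeholder.
import oci

config = oci.config.from_file()
vnet = oci.core.VirtualNetworkClient(config)

compartment_id = "ocid1.compartment.oc1..example"      # placeholder
north_hub_vcn_id = "ocid1.vcn.oc1..north-hub-example"  # placeholder
admin_source_cidr = "0.0.0.0/0"                        # placeholder; restrict in production

nsg = vnet.create_network_security_group(
    oci.core.models.CreateNetworkSecurityGroupDetails(
        compartment_id=compartment_id, vcn_id=north_hub_vcn_id,
        display_name="checkpoint-mgmt-nsg")).data

rules = [oci.core.models.AddSecurityRuleDetails(
            direction="INGRESS", protocol="6",          # protocol 6 = TCP
            source=admin_source_cidr, source_type="CIDR_BLOCK",
            tcp_options=oci.core.models.TcpOptions(
                destination_port_range=oci.core.models.PortRange(min=port, max=port)),
            description=f"Management access on TCP {port}")
         for port in (443, 22, 19009)]

vnet.add_network_security_group_security_rules(
    nsg.id,
    oci.core.models.AddNetworkSecurityGroupSecurityRulesDetails(security_rules=rules))
```

Associate the NSG with the Security Management Server's VNIC so that the rules take effect.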
Considerations
- Performance
- These factors affect performance:
- The Compute shape that you select determines the maximum available throughput, CPU, and RAM.
- Organizations need to know what traffic types traverse the environment, determine the appropriate risk levels, and apply proper security controls as required. Different combinations of enabled security controls affect performance.
- Consider using larger Compute shapes for higher throughput.
- Run performance tests to validate that the design can sustain the required performance and throughput.
- Security
- Deploying Check Point Security Management in OCI allows for centralized security policy configuration and monitoring of all physical and virtual Check Point Security Gateway instances.
- For existing Check Point customers, migrating Security Management to OCI is also supported.
- Configure a distinct Identity and Access Management (IAM) dynamic group and policy for each cluster deployment.
- Availability
- Deploy your architecture to distinct geographic regions for greatest redundancy.
- Configure site-to-site VPN, FastConnect, or both with relevant organizational networks for redundant connectivity with on-premises networks.
- Cost
- Check Point CloudGuard is available in bring-your-own-license (BYOL) and pay-as-you-go license models for Security Management and Security Gateways in the Oracle Cloud Marketplace.
- Check Point CloudGuard Network Security Gateway licensing is based on the number of vCPUs (one OCPU is equivalent to two vCPUs).
- Check Point BYOL licenses are portable between instances. For example, if you migrate workloads from other public clouds that also use BYOL licenses, you do not need to purchase new licenses from Check Point. Check with your Check Point representative if you have questions or need verification of your license status.
- Check Point Security Management is licensed per managed security gateway. For example, two clusters count as four gateways toward the Security Management license.
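To make the licensing arithmetic above concrete, a small illustrative calculation (the shape size and gateway counts are assumptions, not recommendations):

```python
# Illustrative only: relate OCPUs to vCPU-based CloudGuard licensing and count managed gateways.
ocpus_per_gateway = 4                        # assumed Compute shape size
vcpus_per_gateway = ocpus_per_gateway * 2    # one OCPU equals two vCPUs
clusters = 2
gateways = clusters * 2                      # each high availability cluster has two members

print(f"Each gateway needs a {vcpus_per_gateway}-vCPU CloudGuard license")
print(f"Security Management must be licensed for {gateways} managed gateways")
```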
More Information
For more information, see these resources: