How to replace a Quantum Maestro Orchestrator
Solution

Follow the steps below to replace (RMA) a failed Quantum Maestro Orchestrator.

Important Notes

  • We recommend scheduling a maintenance window.
  • In a Dual Site deployment, perform the RMA procedure on the Standby site.

The procedure below uses these Orchestrator IDs:

  • ID of the operational Orchestrator - 1_1
  • ID of the failed Orchestrator - 1_2

 

Procedure for Orchestrators that run the R80.20SP version

  1. Export the configuration files of the failed Orchestrator:

    • If the Orchestrators run R80.20SP with the Jumbo Hotfix Accumulator Take 310 and higher:

      1. Connect to the command line on the operational Orchestrator.

      2. Log in to Gaia Clish.

      3. Export the configuration files of the failed Orchestrator:

        set maestro export remote orchestrator id <ID of the Failed Orchestrator> configuration archive-name <Name of Output Archive File> path <Full Path on Working Orchestrator>

        Example:

        set maestro export remote orchestrator id 1_2 configuration archive-name Export_from_failed_Orch_1_2 path /var/log/

    • If the Orchestrators run R80.20SP with the Jumbo Hotfix Accumulator Take 309 and lower:

      Note - If the failed Orchestrator is not accessible, or if its configuration is empty (for example, after a restore to factory defaults), contact Check Point Support for further instructions.

      1. Connect to the command line on the failed Orchestrator.

      2. Log in to the Expert mode.

      3. Export the configuration files of the failed Orchestrator:

        tar -czf <Full Path on Failed Orchestrator>/<Name of Output Archive File>.tgz -C /etc maestro.json sgdb.json smodb.json maestro_full.json

        Example:

        tar -czf /var/log/Export_from_failed_Orch_1_2.tgz -C /etc maestro.json sgdb.json smodb.json maestro_full.json
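        Optional check (not part of the official procedure) - you can list the contents of the archive before transferring it, to confirm the four configuration files were captured:

        tar -tzf /var/log/Export_from_failed_Orch_1_2.tgz

        The output should list maestro.json, sgdb.json, smodb.json, and maestro_full.json.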

  2. Transfer the archive with the exported configuration files from the Orchestrator to your computer.

    Use an SCP client (WinSCP requires that the default user shell is /bin/bash).
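    For example, on a Linux or macOS computer you can copy the archive with an SCP command similar to the one below (the IP address is a placeholder, and the exact file path depends on which export method you used above):

    scp admin@<MGMT IP Address of Orchestrator>:/var/log/Export_from_failed_Orch_1_2.tgz .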

  3. Stop the Orchestrator service on the failed Orchestrator:

    1. Connect to the command line on the failed Orchestrator.

    2. Log in to the Expert mode.

    3. Stop the service:

      orchd stop
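      Optional check (not part of the official procedure) - you can run the "orchd status" command to confirm that the Orchestrator service is stopped.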

  4. Write down the port numbers to which the Uplink, Downlink, and Sync cables are connected on the failed Orchestrator.

  5. Disconnect the power cables from the failed Orchestrator.

    Important - Do not halt (shut down) the failed Orchestrator. It might reboot instead, and the Orchestrator service will start again.

  6. Connect only these cables to the replacement Orchestrator:

    1. The network cable from your computer to the Orchestrator's MGMT interface.

    2. The console cable from your computer to the Orchestrator's Console Port.

      In your console client, configure these settings:

      • Baud Rate - 115200
      • Data bits - 8
      • Stop bits - 1
      • Parity - None
      • Flow Control - None
    3. The power cables to the Orchestrator's Power Supply Units.

    See the Quantum Maestro Getting Started Guide.

  7. In the console session, log in to Gaia Clish.

  8. Configure the Orchestrator's MGMT interface:

    1. Configure the IPv4 address and Mask Length:

      set interface Mgmt1 ipv4-address <IPv4 Address of MGMT Interface> mask-length <Mask>

    2. Enable the interface:

      set interface Mgmt1 state on

    3. Configure the default gateway:

      set static-route default nexthop gateway address <IPv4 Address of Default Gateway> on

    4. Save the changes:

      save config

    See the R80.20SP Maestro Gaia Administration Guide.
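    Example with placeholder values (substitute the real IPv4 address, mask length, and default gateway of the failed Orchestrator's MGMT interface):

      set interface Mgmt1 ipv4-address 192.168.10.5 mask-length 24
      set interface Mgmt1 state on
      set static-route default nexthop gateway address 192.168.10.1 on
      save config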

  9. Connect to the command line on the replacement Orchestrator through the MGMT interface (over SSH).

  10. Log in to Gaia Clish.

  11. Configure these Gaia settings:

    1. Configure the same date and time settings as configured on all other working Orchestrators (in our example, 1_1).

      Note - The date and time settings must be the same on all Orchestrators.

    2. Configure the hostname.

    3. Change the default password for the 'admin' user.

    4. Save the changes with the "save config" command.

    See the R80.20SP Maestro Gaia Administration Guide.

  12. On the replacement Orchestrator, install the same Take of the R80.20SP Jumbo Hotfix Accumulator as installed on the working Orchestrator (in our example, 1_1).

    Wait for the Orchestrator to reboot.

  13. On the replacement Orchestrator, stop these services:

    1. Connect to the command line on the replacement Orchestrator through the MGMT interface (over SSH).

    2. Log in to the Expert mode.

    3. Stop the Orchestrator service:

      orchd stop

    4. Stop the LLDP service:

      tellpm process:lldpd
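      Optional check (not part of the official procedure) - you can confirm the LLDP daemon is no longer running with a generic process check such as "ps -ef | grep lldpd".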

  14. Transfer the archive with the exported configuration files from your computer to a directory on the replacement Orchestrator (for example, /var/log/).

    Use an SCP client (WinSCP requires that the default user shell is /bin/bash).

  15. On the replacement Orchestrator, import the configuration files you collected earlier:

    • If the Orchestrator runs R80.20SP with the Jumbo Hotfix Accumulator Take 310 and higher:

      1. Connect to the command line on the replacement Orchestrator.

      2. Log in to Gaia Clish.

      3. Import the configuration files:

        set maestro import configuration archive-name <Name of Output Archive File> path <Full Local Path>

        Example:

        set maestro import configuration archive-name Export_from_failed_Orch_1_2.tgz path /var/log/

    • If the Orchestrator runs R80.20SP with the Jumbo Hotfix Accumulator Take 309 and lower:

      1. Connect to the command line on the replacement Orchestrator.

      2. Log in to the Expert mode.

      3. Unpack the configuration files to the /etc/ directory:

        tar -xzf <Full Local Path>/<Name of Output Archive File>.tgz -C /etc

        Example:

        tar -xzf /var/log/Export_from_failed_Orch_1_2.tgz -C /etc
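        Optional check (not part of the official procedure) - you can confirm the files were unpacked:

        ls -l /etc/maestro.json /etc/sgdb.json /etc/smodb.json /etc/maestro_full.json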

  16. Make sure the configuration files are the same on the two Orchestrators.

    1. Connect to the command line on each Orchestrator.

    2. Log in to the Expert mode.

    3. Run:

      jsont -f /etc/sgdb.json -P | egrep -v "topology_timestamp|session_id|active_sgms|all_sgm" | md5sum

      The MD5 checksum must be the same on the two Orchestrators.

      If the MD5 checksums are different:

      1. Transfer the /etc/sgdb.json file from the operational Orchestrator (1_1) to the /etc/ directory on the replacement Orchestrator (1_2), as shown in the example command after this list.
      2. Check the MD5 again.
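      Example of such a transfer, run on the replacement Orchestrator in the Expert mode (the IP address is a placeholder; as with WinSCP, SCP requires that the default user shell on the operational Orchestrator is /bin/bash):

        scp admin@<MGMT IP Address of Orchestrator 1_1>:/etc/sgdb.json /etc/sgdb.json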
  17. Connect these cables to the replacement Orchestrator, using the same port numbers as on the failed Orchestrator:

    • The Downlink cables
    • The Sync cable(s)

    Make sure you connected the cables to the correct ports.

  18. On the replacement Orchestrator, start the Orchestrator service:

    1. Connect to the command line on the replacement Orchestrator.

    2. Log in to the Expert mode.

    3. Disable the Link State Propagation (LSP):

      jsont -f /etc/smodb.json -s /orch_lsp_state -v off

    4. Start the Orchestrator service:

      orchd start

    5. Enable the Link State Propagation (LSP):

      jsont -f /etc/smodb.json -s /orch_lsp_state -v on

    6. Restart the Orchestrator service (this also starts the LLDP service):

      orchd restart
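      Optional check (not part of the official procedure) - you can print the current LSP state with the same "jsont" print option used in the MD5 comparison step:

      jsont -f /etc/smodb.json -P | grep orch_lsp_state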

  19. Make sure Orchestrators can pass traffic to each other:

    • In a Single Site deployment:

      On the operational Orchestrator (in our example, 1_1), send ping to the replacement Orchestrator (in our example, 1_2):

      ping 1_2

    • In a Dual Site deployment:

      On the replacement Orchestrator (in our example, 1_2), send ping to each operational Orchestrator:

      ping 1_1

      ping 2_1

      ping 2_2

  20. Make sure the Security Group Members can pass traffic to each other:

    1. Connect to the command line on the Security Group.

    2. Log in to the Expert mode.

    3. Identify the Security Group Member that runs as SMO:

      asg stat -i tasks

    4. Examine the cluster state of the Security Group Members.

      On the SMO Security Group Member, run:

      cphaprob state

      The output must show that all Security Group Members are active.

    5. Send ping between Security Group Members:

      1. Connect to one of the Security Group Members
        (in our example, we connect to the first one - 1_1):

        member 1_1

      2. On this Security Group Member, send ping to any other Security Group Member
        (in our example, we send ping to the second one - 1_2 / 2_2):

        • In a Single Site deployment:

          ping 1_2

        • In a Dual Site deployment:

          ping 1_2

          ping 2_2

  21. Connect the Uplink cables to the replacement Orchestrator, using the same port numbers as on the failed Orchestrator.

  22. On each Security Group, make sure all links are up:

    1. Connect to the command line on the Security Group.

    2. Examine the state of links:

      asg_if

 

Procedure for Orchestrators that run the R81.10 version and higher

  1. Export the configuration files of the failed Orchestrator:

    1. Connect to the command line on the operational Orchestrator.

    2. Log in to Gaia Clish.

    3. Export the configuration files of the failed Orchestrator:

      set maestro export remote orchestrator id <ID of the Failed Orchestrator> configuration archive-name <Name of Output Archive File> path <Full Path on Working Orchestrator>

      Example:

      set maestro export remote orchestrator id 1_2 configuration archive-name Export_from_failed_Orch_1_2 path /var/log/
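      Optional check (not part of the official procedure) - in the Expert mode on the operational Orchestrator, you can confirm that the archive was created (the ".tgz" extension assumes the archive name format shown in the import step below):

      ls -l /var/log/Export_from_failed_Orch_1_2.tgz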

  2. Transfer the archive with the exported configuration files from the Orchestrator to your computer.

    Use an SCP client (WinSCP requires that the default user shell is /bin/bash).

  3. Stop the Orchestrator service on the failed Orchestrator:

    1. Connect to the command line on the failed Orchestrator.

    2. Log in to the Expert mode.

    3. Stop the service:

      orchd stop

  4. Write down the port numbers to which the Uplink, Downlink, and Sync cables are connected on the failed Orchestrator.

  5. Disconnect the power cables from the failed Orchestrator.

    Important - Do not halt (shut down) the failed Orchestrator. It might reboot instead, and the Orchestrator service will start again.

  6. Connect only these cables to the replacement Orchestrator:

    1. The network cable from your computer to the Orchestrator's MGMT interface.

    2. The console cable from your computer to the Orchestrator's Console Port.

      In your console client, configure these settings:

      • Baud Rate - 115200
      • Data bits - 8
      • Stop bits - 1
      • Parity - None
      • Flow Control - None
    3. The power cables to the Orchestrator's Power Supply Units.

    See the Quantum Maestro Getting Started Guide.

  7. In the console session, log in to Gaia Clish.

    Important Note - Choose not to activate the Orchestrator. If you already activated the Orchestrator, log in to the Expert mode and run the "orchd stop" command.

  8. Configure the Orchestrator's MGMT interface:

    1. Configure the IPv4 address and Mask Length:

      set interface Mgmt1 ipv4-address <IPv4 Address of MGMT Interface> mask-length <Mask>

    2. Enable the interface:

      set interface Mgmt1 state on

    3. Configure the default gateway:

      set static-route default nexthop gateway address <IPv4 Address of Default Gateway> on

    4. Save the changes:

      save config

    See the R81.10 Gaia Administration Guide.

  9. Connect to the command line on the replacement Orchestrator through the MGMT interface (over SSH).

  10. Log in to Gaia Clish.

  11. Configure these Gaia settings:

    1. Configure the same date and time settings as configured on all other working Orchestrators (in our example, 1_1).

      Note - The date and time settings must be the same on all Orchestrators.

    2. Configure the hostname.

    3. Change the default password for the 'admin' user.

    4. Save the changes with the "save config" command.

    See the R81.10 Gaia Administration Guide.

  12. On the replacement Orchestrator, install the same Take of the R81.10 Jumbo Hotfix Accumulator as installed on the working Orchestrator (in our example, 1_1).

    Wait for the Orchestrator to reboot.

  13. Transfer the archive with the exported configuration files from your computer to a directory on the replacement Orchestrator (for example, /var/log/).

    Use an SCP client (WinSCP requires that the default user shell is /bin/bash).

  14. On the replacement Orchestrator, import the configuration files you collected earlier:

    1. Connect to the command line on the replacement Orchestrator.

    2. Log in to Gaia Clish.

    3. Import the configuration files:

      set maestro import configuration archive-name <Name of Output Archive File> path <Full Local Path>

      Example:

      set maestro import configuration archive-name Export_from_failed_Orch_1_2.tgz path /var/log/

  15. Make sure the configuration files are the same on the two Orchestrators.

    1. Connect to the command line on each Orchestrator.

    2. Log in to the Expert mode.

    3. Run:

      jsont -f /etc/sgdb.json -P | egrep -v "topology_timestamp|session_id|active_sgms|all_sgm" | md5sum

      The MD5 checksum must be the same on the two Orchestrators.

      If the MD5 checksums are different:

      1. Transfer the /etc/sgdb.json file from the operational Orchestrator (1_1) to the /etc/ directory on the replacement Orchestrator (1_2).
      2. Check the MD5 again.
  16. Connect these cables to the replacement Orchestrator, using the same port numbers as on the failed Orchestrator:

    • The Downlink cables
    • The Sync cable(s)

    Make sure you connected the cables to the correct ports.

  17. On the replacement Orchestrator, start the Orchestrator service:

    1. Log in to the Expert mode.

    2. Disable the Link State Propagation (LSP):

      jsont -f /etc/smodb.json -s /orch_lsp_state -v off

    3. Check the Orchestrator services status:

      orchd status

    4. If the Orchestrator is not active, activate it (activating it also starts the Orchestrator services):

      orchd activate

      (Otherwise, start the Orchestrator services: orchd start)
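      Optional check (not part of the official procedure) - after the service starts, you can run the "orchd status" command again to confirm that the Orchestrator is active.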

    5. Enable the Link State Propagation (LSP):

      jsont -f /etc/smodb.json -s /orch_lsp_state -v on

    6. Restart the Orchestrator port monitoring daemon with these commands:

      tellpm process:ssm_pmd

      tellpm process:ssm_pmd t

    Note - It may take a few seconds for the Orchestrator to discover the connected Security Appliance for the first time. In the Expert mode, run the "watch -d -n 1 orch_stat" command and wait for all the LAGs to be in the "up" status.

  18. Make sure Orchestrators can pass traffic to each other:

    • In a Single Site deployment:

      On the operational Orchestrator (in our example, 1_1), send ping to the replacement Orchestrator (in our example, 1_2):

      ping 1_2

    • In a Dual Site deployment:

      On the replacement Orchestrator (in our example, 1_2), send ping to each operational Orchestrator:

      ping 1_1

      ping 2_1

      ping 2_2

  19. Make sure the Security Group Members can pass traffic to each other:

    1. Connect to the command line on the Security Group.

    2. Log in to the Expert mode.

    3. Identify the Security Group Member that runs as SMO:

      asg stat -i tasks

    4. Examine the cluster state of the Security Group Members.

      On the SMO Security Group Member, run:

      cphaprob state

      The output must show that all Security Group Members are active.

    5. Send ping between Security Group Members:

      1. Connect to one of the Security Group Members
        (in our example, we connect to the first one - 1_1):

        member 1_1

      2. On this Security Group Member, send ping to any other Security Group Member
        (in our example, we send ping to the second one - 1_2 / 2_2):

        • In a Single Site deployment:

          ping 1_2

        • In a Dual Site deployment:

          ping 1_2

          ping 2_2

  20. Connect the Uplink cables to the replacement Orchestrator, using the same port numbers as on the failed Orchestrator.

  21. On each Security Group, make sure all links are up:

    1. Connect to the command line on the Security Group.

    2. Examine the state of links:

      asg_if

 

Revision History

Date          Change
10 Aug 2021   Added the procedure for Orchestrators that run the R81.10 version
28 June 2021  Created this article for Orchestrators that run the R80.20SP version
