Migrating to a Platform Cluster Setup on AWS

Before you begin, ensure the following:

  1. You have access to an IAM (administrator) user for your account.
  2. The existing VPC meets the following criteria:

    • Two AZs
    • Two Public Subnets
    • Two Private Subnets
    • Two NAT Gateways

  3. Take an RDS snapshot before starting the migration steps (an example CLI command follows the note below).
Note: Your existing environment can remain up and running until step 4 of the Migrating to a Platform Cluster Setup section of this guide.
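
For example, a snapshot can be taken with the AWS CLI (both identifiers below are placeholders for your own):

    aws rds create-db-snapshot \
      --db-instance-identifier <your-db-instance> \
      --db-snapshot-identifier pre-migration-snapshot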

Getting Started with the Migration Process

  1. Download aws.zip from the Platform site and copy it to a controller server (the machine from which all deployment scripts will be run).
  2. Unzip the file to view the following structure.

    --- <aws>
    |-- build (Holds scripts and data files related to building the AMI)
    |-- Cluster-config (Contains the UI tool used to generate your input files)
    |-- deploy (Top-level folder bundling all Ansible deploy scripts/playbooks)
        |-- playbooks
        |-- inventory
        |-- config_files (Configuration files used as part of the deployment process)
        |-- scripts (Scripts that are executed as part of cluster launch)
        |-- ansible_templates (CFN templates used for deployment)
        |-- output (Created at runtime; used to store output files of each playbook)
        |-- migration (Created by the user while copying the config folder from the existing server)
    

  3. To create a new AMI for your deployment, see Deploying Platform on AWS.
  4. Move to the deploy folder.
  5. From your existing Platform master, copy the config file contents to the migration subfolder.
  6. Ensure that the AWS_SECRET_KEY and AWS_ACCESS_KEY environment variables are set in your current shell.
  7. Set the JAVA_HOME variable to point to Java 1.8 on your controller machine, and include $JAVA_HOME/bin in your PATH, as sketched below.
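
    For example (a minimal sketch; the key values and the Java path are placeholders for your own):

    export AWS_ACCESS_KEY=<your-access-key>      # placeholder
    export AWS_SECRET_KEY=<your-secret-key>      # placeholder
    export JAVA_HOME=/usr/lib/jvm/java-1.8.0     # example path; adjust to your installation
    export PATH=$JAVA_HOME/bin:$PATH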
  8. Log in to your account and get the values for the current VPC properties listed below; the AWS CLI sketch after this list is one way to retrieve them.
    • VPC ID
    • PublicSubnet1
    • PublicSubnet2
    • PrivateSubnet1
    • PrivateSubnet2
    • AZ1
    • AZ2
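
    For example, assuming the AWS CLI is configured for this account (the VPC ID filter value is a placeholder):

    aws ec2 describe-vpcs --query 'Vpcs[].VpcId'
    aws ec2 describe-subnets --filters Name=vpc-id,Values=<your-vpc-id> \
        --query 'Subnets[].[SubnetId,AvailabilityZone,MapPublicIpOnLaunch]'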
  9. Run the Config-UI tool to generate the app-config and topology.json files, ensuring that you retain at least as many Storage/Search servers as exist in the current environment.
  10. Edit the app.config file and add the following properties with your environment's values (the values shown below are examples):

    • RDS_DB_END_POINT: rlb-rds-cust-int-dbo.cobn7nddxtef.us-east-1.rds.amazonaws.com
    • VPC: vpc-29e2844e
    • vpcAZ1: us-east-1c
    • vpcAZ2: us-east-1d
    • vpcPubSubnet1: subnet-e6172dcc
    • vpcPubSubnet2: subnet-b8da30f1
    • vpcPvtSubnet1: subnet-e7172dcd
    • vpcPvtSubnet2: subnet-bbda30f2

Migrating to a Platform Cluster Setup

  1. Run the init.yml playbook. It sets up the necessary files, such as the encryption key file.

    ansible-playbook -i ../inventory/host init.yml

  2. Run update-vpc.yml

    ansible-playbook -i ../inventory/host update-vpc.yml

    This playbook reuses the existing VPC and public/private subnets provided as input and creates the following resources:

    • IAM Roles
    • SQS-SNS
    • EFS Volumes
    • Security Groups
    • S3 Buckets for storing configuration files

    It also uploads all required configuration files to S3; a quick way to verify the upload is sketched below.
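
    For example (a sketch; the bucket name is a placeholder, as the actual bucket is created by the playbook):

    aws s3 ls s3://<your-config-bucket>/ --recursive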

  3. Run create-asg.yml

    ansible-playbook -i ../inventory/host create-asg.yml

    This playbook creates Auto Scaling groups for Nginx/OPS and for each role defined in your topology.json; a CLI check is sketched after the list below.

    Each Auto Scaling group includes:

    • Launch Configurations with userdata
    • LifeCycle hooks
    • Notification hooks with SNS sending messages to SQS queue
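
    To confirm the groups were created, a quick AWS CLI check (a sketch):

    aws autoscaling describe-auto-scaling-groups \
        --query 'AutoScalingGroups[].[AutoScalingGroupName,MinSize,MaxSize,DesiredCapacity]'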

  4. Run create-hooks.yml

    ansible-playbook -i ../inventory/host create-hooks.yml

    This playbook creates the lifecycle hooks that are attached to each corresponding Auto Scaling group.

    Note: At this point, ensure that your current application servers are completely shut down and that there are no open connections to the RDS; one way to check for open connections is sketched below.
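
    For example, assuming a MySQL-compatible RDS instance (endpoint and user are placeholders):

    mysql -h <RDS_DB_END_POINT> -u <admin_user> -p -e 'SHOW PROCESSLIST;'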

  5. Run launch-migrate-node.yml

    ansible-playbook -i ../inventory/host launch-migrate-node.yml

    This playbook launches an OPS node with /efs_data (an EFS volume) mounted. To complete the migration, copy the existing Storage/Search data to the new cluster.

    The instance also has all ports open to the private IP range within the VPC, so you can connect to it over scp from your existing Search/Storage machine in the same VPC. If the data set is small, run a simple scp from the old (existing) Storage/Search machine and copy the contents over to the new machine (it is recommended to zip, copy, and then unzip).

    For example:

    scp -r -i /tmp/my-keypair Files ec2-user@10.230.13.23:/efs_data/storage1

    If the data set is large, however, consider migrating the data ahead of the actual application migration date.

    Note:

    Storage data must be copied to /efs_data/<storage_component_name>/ and Search data to /efs_data/<search_component_name>/.

    If your topology contains more than one Storage/Search component, copy the data folders for each component to its respective folder under /efs_data, as sketched below.
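
    A minimal zip/copy/unzip sketch (component name, source path, host, and keypair are placeholders):

    # On the existing Storage machine: archive and copy the data
    tar czf /tmp/storage1.tar.gz -C /data/storage1 .
    scp -i /tmp/my-keypair /tmp/storage1.tar.gz ec2-user@10.230.13.23:/tmp/

    # On the new OPS node: extract into the component's folder on EFS
    mkdir -p /efs_data/storage1
    tar xzf /tmp/storage1.tar.gz -C /efs_data/storage1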

  6. Ensure that the RDS instance is using the DB security group (DBSG) created by this process, and that update_5.0.0.0.sql has been executed against every RDS instance used by your environment; one way to run the script is sketched below.
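
    For example, assuming a MySQL-compatible RDS instance (endpoint, user, and database name are placeholders):

    mysql -h <RDS_DB_END_POINT> -u <admin_user> -p <db_name> < update_5.0.0.0.sql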
  7. Launch the new cluster:

    ansible-playbook -i ../inventory/host launch-cluster.yml

    This playbook updates each ASG's Min/Max/Desired instance counts to match the values set in your topology.json and waits until the application endpoint is reachable. When the command completes, your environment is accessible.
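
    To double-check reachability yourself, a simple probe (the endpoint URL is a placeholder):

    curl -sSf https://<your-app-endpoint>/ -o /dev/null && echo reachable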