HA versus LB Configuration of F5

F5 can be set up on GCP using a High Availability (HA) configuration as well as a Load Balancing configuration (using GCP's Managed Instance Groups).

  • Standalone
    This Terraform plan uses the Google provider to build the necessary Google objects and a standalone BIG-IP device with 3 NICs. Traffic flows from client to F5 to backend app servers.
  • Autoscale via LB
    This Terraform plan deploys BIG-IP devices with 3 NICs in a Google Managed Instance Group (MIG). Each device is standalone, retrieves its onboarding from custom-data, and is treated as immutable. Network/application changes are made to the Terraform files (or the DO and AS3 JSON files), and the Google MIG performs rolling upgrades of each BIG-IP as a result of the modified custom-data (a minimal Terraform sketch of the MIG follows this list).
  • HA via API
    This Terraform plan uses the Google provider to build the necessary Google objects and a pair of BIG-IP devices with 3 NICs. The F5 Cloud Failover Extension (CFE) calls the Google REST API and moves cloud objects (e.g., IP addresses, routes) during failover when a BIG-IP detects a problem with its peer. Traffic flows from client to F5 to backend app servers.
  • HA via LB
    This Terraform plan uses the Google provider to build the necessary Google objects and a pair of BIG-IP devices with 3 NICs. A Google LB distributes traffic to the F5 BIG-IP devices for high availability and failover. Traffic flows from client to GLB/ILB to F5 to backend app servers.
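
A minimal sketch of the Autoscale via LB building blocks (an instance template plus a MIG) is shown below. The variable names (var.f5_bigip_image, var.external_subnet) and the onboard.sh file are placeholders, and only a single NIC is shown for brevity; the published F5 plans add the remaining NICs, the DO/AS3 payloads, and an autoscaler on top of this.

    resource "google_compute_instance_template" "bigip" {
      name_prefix  = "bigip-"
      machine_type = "n1-standard-4"

      disk {
        source_image = var.f5_bigip_image   # F5 BIG-IP VE image (placeholder variable)
        boot         = true
      }

      # Shown with a single NIC for brevity; the actual plan attaches 3 NICs,
      # each in a different VPC network.
      network_interface {
        subnetwork = var.external_subnet
      }

      # Onboarding delivered as instance metadata ("custom-data"); changing it
      # produces a new template and therefore a rolling replacement.
      metadata = {
        "startup-script" = file("onboard.sh")
      }

      lifecycle {
        create_before_destroy = true
      }
    }

    resource "google_compute_instance_group_manager" "bigip" {
      name               = "bigip-mig"
      zone               = "us-central1-a"
      base_instance_name = "bigip"
      target_size        = 2

      version {
        instance_template = google_compute_instance_template.bigip.id
      }

      # Roll each BIG-IP when the template (and hence the custom-data) changes.
      update_policy {
        type                  = "PROACTIVE"
        minimal_action        = "REPLACE"
        max_surge_fixed       = 1
        max_unavailable_fixed = 0
      }
    }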

Some scoping questions:

  1. Can you use GCP Auto Scaling with F5? What about Target Groups?
  2. Can the target groups be Cloud Functions?
  3. What about DNS? How is that handled?
  4. How many network interfaces are needed for F5 appliances?
  5. How many IP addresses are needed for F5?

Attaching multiple network interfaces to an instance is useful when you want to:

  • Create a management network.
  • Use network and security appliances in your VPC.
  • Create dual-homed instances with workloads/roles on distinct subnets.

How do I do this through Terraform? 
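
With the Google provider, one way to do it is to declare one network_interface block per NIC on the instance, as sketched below. The variable names (var.f5_bigip_image, var.mgmt_subnet, var.external_subnet, var.internal_subnet) are placeholders, and GCP requires each interface to attach to a different VPC network.

    resource "google_compute_instance" "bigip" {
      name         = "bigip-standalone"
      machine_type = "n1-standard-4"
      zone         = "us-central1-a"

      boot_disk {
        initialize_params {
          image = var.f5_bigip_image   # F5 BIG-IP VE image from the Marketplace (placeholder)
        }
      }

      # NIC 0 - management
      network_interface {
        subnetwork = var.mgmt_subnet
        access_config {}   # ephemeral public IP for management access
      }

      # NIC 1 - external (client-facing)
      network_interface {
        subnetwork = var.external_subnet
        access_config {}
      }

      # NIC 2 - internal (toward backend app servers)
      network_interface {
        subnetwork = var.internal_subnet
      }
    }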

Actual deployment (on AWS) - create three network interfaces (you are prompted for the public interfaces during AMI-based instance creation).

Deploy the BIG-IP VE instance

From the AWS Marketplace, choose an F5 BIG-IP VE image. Ensure you add an extra (external) NIC; a Terraform sketch of this two-NIC launch follows the interface list below.

  • Management interface: eth0 10.0.0.200
  • External interface: eth1 10.0.1.200
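
A rough Terraform equivalent of that Marketplace launch, using placeholder variables for the AMI ID, subnet IDs, and key name, and the example addresses above:

    resource "aws_network_interface" "mgmt" {
      subnet_id   = var.mgmt_subnet_id       # placeholder
      private_ips = ["10.0.0.200"]
    }

    resource "aws_network_interface" "external" {
      subnet_id   = var.external_subnet_id   # placeholder
      private_ips = ["10.0.1.200"]
    }

    resource "aws_instance" "bigip" {
      ami           = var.f5_bigip_ami_id    # F5 BIG-IP VE AMI from the AWS Marketplace (placeholder)
      instance_type = "m5.xlarge"
      key_name      = var.ssh_key_name

      # eth0 - management
      network_interface {
        network_interface_id = aws_network_interface.mgmt.id
        device_index         = 0
      }

      # eth1 - external
      network_interface {
        network_interface_id = aws_network_interface.external.id
        device_index         = 1
      }
    }
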
Create an internal network interface

You created NICs for the management and external subnets when you deployed the instance. You must now create an internal NIC and reboot so that BIG-IP VE can recognize the new NIC.

  • Internal interface: eth2 10.0.2.200
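
A minimal Terraform sketch of adding that third interface, reusing aws_instance.bigip and the placeholder variables from the sketch above:

    resource "aws_network_interface" "internal" {
      subnet_id   = var.internal_subnet_id   # placeholder
      private_ips = ["10.0.2.200"]
    }

    # Attach as eth2 to the already-running BIG-IP instance; reboot the
    # instance afterwards so BIG-IP VE detects the new NIC.
    resource "aws_network_interface_attachment" "internal" {
      instance_id          = aws_instance.bigip.id
      network_interface_id = aws_network_interface.internal.id
      device_index         = 2
    }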

Appendix - F5 BIG-IP Configuration Utility

  1. Open a web browser and log in to the BIG-IP Configuration utility by using https with the external IP address, for example: https://<external-ip-address>. The username is admin and the password is the one you set previously.

  2. On the Setup Utility Welcome page, click Next.

  3. On the General Properties page, click Activate.

  4. In the Base Registration key field, enter the case-sensitive registration key from F5.

    For Activation Method, if you have a production or evaluation license, choose Automatic and click Next.