
F5 BIG-IP v11 LTM & GTM
Installation, Administration & Configuration
Overview

The Evolution to Application Delivery Controllers (a.k.a. Load Balancers)

Primary drivers can be summarized as scalability, high availability and predictability.

In the Beginning, There Was DNS


Overview

Load Balancing in Software


Overview

Network-Based Load Balancing Hardware


Overview
Architecture Overview
BIG-IP Hardware Platforms

Datasheet
BIG-IP Virtual Editions
Local Traffic Manager (LTM)
Load Balancing
Load Balancing Methods
Load Balancing Methods - STATIC
Review Quiz
Load Balancing Methods - DYNAMIC
Review Quiz
Priority Group Activation
Review Quiz
Monitors
Monitor Configuration & Review
Review Quiz
Monitor Assignment

Creating custom monitors is an important process, but unless the monitor is assigned to something - a node, a pool member, or a pool - the monitor will not perform any tests.
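
Monitor assignment can be sketched in tmsh at each of those three levels. This is a minimal example assuming a hypothetical pool named web_pool with a member 10.0.0.1:80; the http, tcp, and icmp monitors referenced are the built-in defaults:

```
# Assign the built-in http monitor at the pool level
tmsh modify ltm pool web_pool monitor http

# Assign a monitor to a single pool member instead
tmsh modify ltm pool web_pool members modify { 10.0.0.1:80 { monitor tcp } }

# Assign a default monitor to the node address itself
tmsh modify ltm node 10.0.0.1 monitor icmp
```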
Review Quiz
Monitor Status Reporting
Review Quiz
Other Monitor options
Profiles
Profiles Overview
Review Quiz
Profiles Overview
HTTP profile options
OneConnect
HTTP Compression

Compression Concepts
Persistence
Persistence Concepts
Review Quiz
Persistence Revisited
Review Quiz
Persistence Revisited
Review Quiz
Processing SSL Traffic
Take a moment to think about this dilemma. How might BIG-IP enforce
persistence in this situation - HTTPS traffic coming through a NAT
device? What do you think is the solution?
Exploring SSL on BIG-IP
Review Quiz
Configuring BIG-IP for SSL traffic
Review Quiz
NAT & SNAT
NAT concepts and configuration
Review Quiz
SNAT concepts
Review Quiz
SNATs Revisited
SNAT Auto Map
iRules
iRules Concepts
Exploring iRules concepts
Review Quiz
iRules Revisited

Analyse an example iRule

Examine this iRule and then consider the questions that follow:

rule BrowserType {
  when HTTP_REQUEST {
    if { [HTTP::header User-Agent] contains "MSIE" } {
      pool IE_Pool
    } elseif { [HTTP::header User-Agent] contains "Mozilla" } {
      pool Mz_Pool
    }
  }
}

1. What do you think this iRule will do?
2. Why might you apply such an iRule?
3. Do you see any potential problems with this iRule?
iRules Revisited

Do you think it is important to write iRules as efficiently as possible?

1. Yes, because BIG-IP LTM processes an iRule each time its declared Event occurs, it is important to keep them as small and efficient as possible.

2. No, with the computing power available in BIG-IP, the amount of time it takes to process an iRule is negligible.
iRules Revisited
Review Quiz
iApps
iApps Concepts

iApps is the BIG-IP framework for deploying services-based, template-driven configurations and for maintaining applications.

iApps, introduced in BIG-IP version 11, allows the creation of application-centric configuration interfaces on BIG-IP, reducing configuration time and increasing the accuracy of complex traffic management configurations.

Benefits of iApps:
- Configuration encapsulation
- Simplified deployment and ongoing configuration management
- Operational tasks and health status for App objects displayed in an App-specific view
- Community support for DevCentral-hosted templates
iApps Concepts

The iApps framework consists of two main components:

- Application Services
iApps application services use templates to guide users through configuring new BIG-IP system configurations.

- Templates
iApps templates create configuration-specific forms used by application services to guide authorized users through complex system configurations.
Application Services
Templates
High Availability
Sync-Failover Group Concepts
Synchronization, State and Failover

- Synchronizing the Configuration
Many of the parameters for each of the systems must be configured identically.
Traffic Group Concepts

- Traffic Group
A traffic group is a collection of related configuration objects that run on a BIG-IP device. Together these objects process a particular type of traffic on that device. When a BIG-IP device becomes unavailable, a traffic group floats to another device in the device group.

- Default traffic groups on the system
traffic-group-1 contains the floating self-IP addresses, iApps, virtual IP addresses, and NATs or SNATs.
traffic-group-local-only contains the static self-IP addresses for each VLAN.
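
A quick way to inspect traffic groups from the command line is with tmsh; a sketch (exact output varies by version):

```
# List the traffic groups defined on the system
tmsh list cm traffic-group

# Show the current failover state of this device
tmsh show sys failover
```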
Failover Triggers
Failover Triggers and Detection

Failover Triggers

There are many events that are monitored as HA features. They include:
- Specific processes
- VLAN functionality
- The switch board
Failover Triggers and Detection

BIG-IP processes
Failover Triggers and Detection

VLAN Fail-safe

BIG-IP system supports failure detection for each VLAN

When failsafe is enabled, the BIG-IP system monitors network


traffic going thru that VLAN

Detects no network traffic, BIG-IP tries to generate traffic

Timeout reached, still no traffic detected, Standby becomes


Active
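
A minimal tmsh sketch of the above, assuming a hypothetical VLAN named external (option names as in BIG-IP v11):

```
# Fail over if no traffic is seen on the VLAN for 90 seconds
tmsh modify net vlan external failsafe enabled failsafe-timeout 90 failsafe-action failover
```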
Review Quiz
Failover Detection
Stateful failover
Device Group Communication
Command Line
Interface
Command Line Usage
Command Line Usage

About the Traffic Management Shell

The BIG-IP system includes a tool known as the Traffic Management Shell
(tmsh) that you can use to configure and manage the system from the
command line.

Using tmsh, you can configure system features, and set up network elements.

You can also configure the BIG-IP system to manage local and global traffic
passing through the system, and view statistics and system performance data.
Command Line Usage

Additional command line utilities and tools

- The config utility
- The bigtop utility
- The bigstart command
- The Tools Command Language (Tcl) programming language
- The OpenSSL utility


Command Line Usage

Basic syntax conventions


Command Line Usage

Understanding the structure of tmsh

tmsh is an interactive shell that you use to manage the BIG-IP system. The structure of tmsh is hierarchical and modular. The highest level is the root module, which contains twelve subordinate modules.
Command Line Usage

Using tmsh

You must provision a BIG-IP module before you can use tmsh to configure it.
The command sequence list sys provision displays the BIG-IP system modules
that can be provisioned.

You can issue a single tmsh command at the BIG-IP system prompt using this
syntax:
tmsh [command] [module...module] [component] (options)

e.g.: tmsh show ltm pool all-properties

You can open tmsh by typing tmsh at the BIG-IP system prompt. This starts
tmsh in interactive shell mode and displays the tmsh prompt:
(tmos)#

tmsh applies all configuration changes that you make from within tmsh to
the running configuration of the system. For tmsh to write the changes to
the stored configuration files, you must save the changes using the save
sys config command sequence.
Command Line Usage

Loading and saving the system configuration

save /sys config
load /sys config

Working within the tmsh hierarchy

To navigate to the ltm module, type: ltm
The ltm module prompt displays: (tmos.ltm)#

To navigate to the ltm pool component, type: ltm pool
The ltm pool component prompt displays: (tmos.ltm.pool)#

To navigate to pool1, type: modify ltm pool pool1
The pool1 object prompt displays: (tmos.ltm.pool.pool1)#
Command Line Usage

Leaving object mode, component mode, a module, or tmsh


BIG-IP Administration
Upgrading the BIG-IP system
Administrative Domains
Clustered Multi-Processing
Clustered Multi-Processing (CMP) is a feature that was added in BIG-IP 9.4.0. CMP allows specific platforms with multiple processing cores to use multiple Traffic Management Microkernel (TMM) instances to increase traffic management capacity.

Platform              Processors     TMM instances
BIG-IP 1600           1 Dual-core    2
BIG-IP 3600           1 Dual-core    2
BIG-IP 3900           1 Quad-core    4
BIG-IP 6400*          2 Single-core  2
BIG-IP 6800*          2 Single-core  2
BIG-IP 6900           2 Dual-core    4
BIG-IP 8400           2 Single-core  2
BIG-IP 8800           2 Dual-core    4
BIG-IP 8900           2 Quad-core    8
BIG-IP 8950           2 Quad-core    8
BIG-IP 11050          2 Six-core     12
VIPRION 2100 blade**  1 Quad-core    8
VIPRION 4100 blade    2 Dual-core    4
VIPRION 4200 blade    2 Quad-core    8
VIPRION 4300 blade    2 Six-core     12
Clustered Multi-Processing

Virtual Server: 172.16.10.10:80

Pool with 4 members:
10.0.0.1:80
10.0.0.2:80
10.0.0.3:80
10.0.0.4:80

Pool Load Balancing Method: Round Robin


Scenario 1: Virtual server without CMP enabled

Four connections are made to the virtual server. The BIG-IP system load balances
the four individual connections to the four pool members based on the Round
Robin load balancing algorithm:

--Connection 1--> | | --Connection 1--> 10.0.0.1:80
--Connection 2--> | | --Connection 2--> 10.0.0.2:80
--Connection 3--> | | --Connection 3--> 10.0.0.3:80
--Connection 4--> | | --Connection 4--> 10.0.0.4:80
Clustered Multi-Processing

Scenario 2: Virtual server with CMP enabled on a BIG-IP 8800

Four connections are made to the virtual server. Unlike the first scenario, where CMP was disabled, the BIG-IP system distributes the connections across multiple TMM processes. The BIG-IP 8800 with CMP enabled can use four TMM processes. Since each TMM handles load balancing independently of the other TMM processes, it is possible that all four connections are directed to the same pool member.

--Connection 1--> | | --Connection 1--> TMM0 --> 10.0.0.1:80
--Connection 2--> | | --Connection 2--> TMM1 --> 10.0.0.1:80
--Connection 3--> | | --Connection 3--> TMM2 --> 10.0.0.1:80
--Connection 4--> | | --Connection 4--> TMM3 --> 10.0.0.1:80
Logs & Notification
Logs & Notification

System Log Configuration

Determines the type of messages that are captured and where that
information is stored.

Messages are defined in terms of the facility that manages the data and the
level of that particular message.
Logs & Notification

Logging to a Remote Host

A BIG-IP system can send any or all of its syslog messages to a remote host.
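
A hedged tmsh sketch of adding a remote syslog destination, assuming a hypothetical collector at 192.0.2.50 (the entry name remotesyslog1 is arbitrary):

```
tmsh modify sys syslog remote-servers add { remotesyslog1 { host 192.0.2.50 remote-port 514 } }
tmsh save sys config
```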
Logs & Notification

Log File Location and Names


Logs & Notification
F5 Support
Global Traffic Manager
(GTM)
GTM Overview
BIG-IP GTM Overview

The GTM system adds intelligence and control to the Internet industry-standard Domain Name System (DNS) architecture.

By assessing the health of data centers and the network, the GTM system can resolve name queries to servers that are both available and optimal, based on criteria that you select.
BIG-IP GTM Overview

GTM System benefits & features

Intelligent DNS Resolution
- Application Monitoring
- Network Monitoring
- Policy-based Resolutions

Accelerated DNS Resolution
- DNS Server Load Balancing
- DNS Express
- DNS Cache

Secure DNS Resolution
- DNSSEC
DNS Overview

The DNS Hierarchy


DNS Overview
Accelerated DNS Resolution
GTM and DNS Resolutions
Hierarchy of Options Flow Chart
GTM Listeners
Load Balancing DNS Queries
Intelligent DNS Resolution
Intelligent DNS Resolutions
Metric Collection
When a DNS resolution request is processed by a GTM system, the response will be
the IP address of the best virtual server. The most important part of "best" is
determining which of the virtual servers are currently working properly; GTM uses its
monitor capability to verify this. In addition, GTM can test path metrics between your
data centers and your customer using LDNS path probes.
Metric Collection
Intelligent Name Resolution
Data Centers
GTM Systems
Adding LTM Systems
GTM/LTM System Communications: iQuery
Adding Non-F5 Servers
Links
Wide IPs & Wide IP Pools
LDNS Probes and Metrics
Metric Overview
LDNS Probe Configuration
LDNS Probes and Metrics
Load Balancing in GTM
Static Load Balancing Modes
Dynamic Load Balancing Modes
Topology Load Balancing
Monitors in GTM
Monitors Overview
Monitors
Monitor Types
Monitor Configuration
Additional Topics
DNSSEC
Synchronization
Thank You