January 3, 2008
In This Document
Introduction
Supported Hardware and Operating System
Setting Up CoreXL
Adding Processing Cores to the Hardware
CoreXL Configuration
Command Line Reference
Copyright 2007 Check Point Software Technologies, Ltd. All rights reserved
Introduction
CoreXL is a performance-enhancing technology for VPN-1 gateways on multi-core processing platforms. CoreXL enhances VPN-1 performance by enabling the processing cores to concurrently perform multiple tasks. CoreXL provides almost linear scalability of performance, according to the number of processing cores on a single machine. The increase in performance is achieved without requiring any changes to management or to network topology. CoreXL joins ClusterXL Load Sharing and SecureXL (Performance Pack) as part of Check Point's fully complementary family of traffic acceleration technologies.

In a CoreXL gateway, the firewall kernel is replicated multiple times. Each replicated copy, or instance, of the firewall kernel runs on one processing core. The instances handle traffic concurrently, and each instance is a complete and independent VPN-1 inspection kernel.

CoreXL is based on the NGX R65 version of VPN-1. As far as network topology, management configuration, and security policies are concerned, a CoreXL gateway functions as a regular VPN-1 NGX R65 gateway. All of the kernel instances of a gateway handle traffic going through the same gateway interfaces and apply the same gateway security policy.

This document provides basic information for deploying VPN-1 with CoreXL. Optional advanced configuration is discussed in the CoreXL Advanced Configuration Guide.
Note - Before installing CoreXL, read the latest version of the VPN-1 NGX R65 with CoreXL Release Notes, available at: http://www.checkpoint.com/support/technical/documents/index.html
Setting Up CoreXL
No adjustments to the network are required. For management, use only the NGX R65 version of SmartCenter or of Provider-1. No changes to security policies are necessary. You can use the same policies for regular NGX R65 and for CoreXL gateways, as long as they do not use unsupported features (see the VPN-1 NGX R65 with CoreXL Release Notes).
CoreXL Administration Guide. Last Update: January 3, 2008
To install CoreXL:
1. Disable Hyper-Threading (if supported on your hardware platform) in the BIOS.

2. From a VPN-1 NGX R65 with CoreXL installation source (CD or network file server), install VPN-1 on the gateway. Perform the installation according to the instructions for a new installation in the Internet Security Product Suite Getting Started Guide, through system configuration and reboot. Install in a Distributed deployment only. Standalone deployment is not supported.
Note - During the VPN-1 installation process, you can select to install Performance Pack. CoreXL and Performance Pack increase performance using different technologies, and can function together in a complementary fashion. If you install Performance Pack and later decide not to run it, you will be able to then disable it. For considerations and instructions regarding running Performance Pack on a CoreXL gateway, see Running Performance Pack with CoreXL on page 6.
3. To enhance performance, configure the hardware so that interfaces handling heavy traffic do not share interrupt requests (IRQs). To view the IRQs of all interfaces, run:
fw ctl affinity -l -v -a
If multiple interfaces share an IRQ, try to make sure that only one of them handles heavy traffic.

4. To enhance performance, if your CoreXL gateway is not handling VPN traffic, disable VPN on the gateway, as follows:

a. In SmartDashboard, double-click the CoreXL gateway icon to open the gateway's General Properties.
b. Under Check Point Products, clear VPN.
c. Install Policy.

5. If your platform contains only two processing cores, the default configuration does not provide optimal performance. In this case, optimize your configuration as follows:

a. On the gateway, run:
d. Configure Interface Affinity to achieve optimal performance according to one of the following two options:
Option 1: Running with Performance Pack's automatic Affinity configuration

1. Verify that automatic mode is enabled by running the command:
sim affinity -a
2. Edit $FWDIR/scripts/fwaffinity_used_cpus
3. Add the following as the first line of the file:
exit
There is no need to reboot.
Option 2: Running with manual Affinity configuration

Define which core should handle the traffic of each interface (the Interface Affinity). If you have multiple interfaces, you need to decide which interfaces to associate with each of the two cores. Try to achieve a balance of expected traffic between the cores. You can check the resulting balance of traffic by using the top command. To set interface affinities, refer to the CoreXL Advanced Configuration Guide.
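Step 3 above recommends keeping heavy-traffic interfaces off shared IRQs. The following is an illustrative sketch, not a Check Point tool: it scans /proc/interrupts-style text for lines naming more than one ethN interface. The here-doc sample is invented for illustration; on a real gateway you could pipe in /proc/interrupts, or cross-check the result against `fw ctl affinity -l -v -a`.

```shell
# Sketch: report IRQs that more than one network interface is attached to.
# The function name and the sample data below are ours, not part of CoreXL.
find_shared_irqs() {
  awk 'NR > 1 {
    n = gsub(/eth[0-9]+/, "&")          # count interface names on this line
    if (n > 1) {
      split($0, a, ":"); gsub(/ /, "", a[1])
      print "IRQ " a[1] " is shared by " n " interfaces"
    }
  }'
}

find_shared_irqs <<'EOF'
           CPU0       CPU1
 16:      10234       9876   IO-APIC-fasteoi   eth0
 17:       5123       4321   IO-APIC-fasteoi   eth1
 18:        987        654   IO-APIC-fasteoi   eth2, eth3
EOF
# prints: IRQ 18 is shared by 2 interfaces
```

In this sample, eth2 and eth3 share IRQ 18, so only one of the two should be expected to carry heavy traffic.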
In general, reinstalling CoreXL changes the number of kernel instances if you have upgraded the hardware to an increased number of processing cores, or if the number of processing cores stays the same but the number of kernel instances was previously changed manually from the default.

In a clustered deployment, changing the number of kernel instances (such as by reinstalling CoreXL) should be treated as a version upgrade. Follow the instructions in the NGX R65 Upgrade Guide, in the Upgrading ClusterXL Deployments chapter, and perform either a Minimal Effort Upgrade (using network downtime) or a Zero Downtime Upgrade (no downtime, but active connections may be lost), substituting the instance number change for the version upgrade in the procedure. A Full Connectivity Upgrade cannot be performed in a CoreXL cluster.
CoreXL Configuration
This section contains information on the default configuration and on basic configuration options. See the CoreXL Advanced Configuration Guide for information on changing the allocation of processing cores for different tasks.
In This Section
Running Performance Pack with CoreXL
Default Configuration
Viewing the Existing Configuration
Default Configuration
Four or more cores
When running CoreXL on four or more processing cores, the number of kernel instances in the CoreXL post-setup configuration is one less than the number of processing cores. The remaining processing core is responsible for processing incoming traffic from the network interfaces, securely accelerating authorized packets (if Performance Pack is running) and distributing non-accelerated packets among kernel instances. Upon installation of CoreXL, the number of kernel instances is set to n-1, where n is the total number of processing cores on the platform. The instances are numbered from 0 to n-2. CoreXL is designed for a maximum of eight processing cores. If your platform has more than that, the number of kernel instances will still be set to only seven.
Two cores
When running CoreXL on two processing cores, the number of kernel instances in the CoreXL post-setup configuration is set to two, and the instances are numbered 0 and 1. Incoming traffic is processed on the same cores that are assigned to the instances, according to the Interface Affinity configured during setup (see step 5d of Setting Up CoreXL).
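The default instance counts described above (two instances on a two-core platform, one less than the number of cores on four to eight cores, and a cap of seven instances beyond eight cores) can be summarized in a small sketch. The function name is ours, not a Check Point command:

```shell
# Sketch: compute the default number of CoreXL kernel instances for a given
# core count, following the rules stated in this section.
default_instances() {
  cores=$1
  if [ "$cores" -le 2 ]; then
    echo 2                  # two-core platforms get two instances
  elif [ "$cores" -le 8 ]; then
    echo $((cores - 1))     # one core is left for dispatching traffic
  else
    echo 7                  # CoreXL is designed for at most eight cores
  fi
}

default_instances 4   # prints: 3
default_instances 16  # prints: 7
```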
If Performance Pack is running, you can view its current affinity settings by running:

sim affinity -l
fw ctl affinity
The fw ctl affinity command controls affinity settings.

To set affinities:

fw ctl affinity -s

To list existing affinities:

fw ctl affinity -l
fw ctl affinity -s
Use this command to set affinities. For an explanation of kernel, daemon and interface affinities, see the CoreXL Advanced Configuration Guide.
fw ctl affinity -s settings are not persistent through a restart of VPN-1. If you want the settings to be persistent, either use sim affinity (a Performance Pack command; see the Performance Pack Administration Guide for details) or edit the fwaffinity.conf configuration file (see the CoreXL Advanced Configuration Guide for details).
To set interface affinities, you should use fw ctl affinity only if Performance Pack is not running. If Performance Pack is running, you should set affinities by using the Performance Pack sim affinity command. These settings will be persistent. If Performance Pack's sim affinity is set to Automatic mode (even if Performance Pack was subsequently disabled), you will not be able to set interface affinities by using fw ctl affinity -s.
Syntax:

fw ctl affinity -s <proc_selection> <cpuid>

<proc_selection> is one of the following parameters:

Parameter           Description
-p <pid>            Sets affinity for a particular process, where <pid> is the process ID.
-n <cpdname>        Sets affinity for a Check Point daemon, where <cpdname> is the Check Point daemon name (for example: fwd).
-k <instance>       Sets affinity for a kernel instance, where <instance> is the instance's number.
-i <interfacename>  Sets affinity for an interface, where <interfacename> is the interface name (for example: eth0).

<cpuid> should be a processing core number or a list of processing core numbers. To have no affinity to any specific processing core, <cpuid> should be: all.
Note - Setting an Interface Affinity will set the affinities of all interfaces sharing the same IRQ to the same processing core. To view the IRQs of all interfaces, run: fw ctl affinity -l -v -a
Example
To set kernel instance #3 to run on processing core #5, run:
fw ctl affinity -s -k 3 5
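When several affinities need to be set at once, it can help to keep the plan in one place and generate the commands from it. This is an illustrative sketch, not part of CoreXL: each plan line is a selector letter (k, i or n, matching the -k/-i/-n parameters above), an identifier, and a core number; the helper only prints the resulting commands for review, and all the pair values are invented examples.

```shell
# Sketch: turn a simple affinity plan into fw ctl affinity -s command lines.
# The function name and plan values are ours; nothing is executed here.
gen_affinity_cmds() {
  while read -r type id cpu; do
    [ -n "$type" ] || continue                 # skip blank lines
    echo "fw ctl affinity -s -$type $id $cpu"
  done
}

gen_affinity_cmds <<'EOF'
k 0 1
k 1 2
i eth0 0
n fwd 3
EOF
# prints, among others: fw ctl affinity -s -k 0 1
```

On a gateway, you could pipe the printed lines to sh once you have reviewed them, bearing in mind that fw ctl affinity -s settings do not persist across a restart of VPN-1.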
fw ctl affinity -l
Use this command to list existing affinities. For an explanation of kernel, daemon and interface affinities, see the CoreXL Advanced Configuration Guide.
Syntax:

fw ctl affinity -l [<proc_selection>] [<listtype>]

If <proc_selection> is omitted, fw ctl affinity -l lists the affinities of all Check Point daemons, kernel instances and interfaces. Otherwise, <proc_selection> is one of the following parameters:
Parameter           Description
-p <pid>            Displays the affinity of a particular process, where <pid> is the process ID.
-n <cpdname>        Displays the affinity of a Check Point daemon, where <cpdname> is the Check Point daemon name (for example: fwd).
-k <instance>       Displays the affinity of a kernel instance, where <instance> is the instance's number.
-i <interfacename>  Displays the affinity of an interface, where <interfacename> is the interface name (for example: eth0).

If <listtype> is omitted, fw ctl affinity -l lists items with specific affinities, and their affinities. Otherwise, <listtype> is one or more of the following parameters:
Parameter  Description
-a         All: includes items without specific affinities.
-r         Reverse: lists each processing core and the items that have it as their affinity.
-v         Verbose: list includes additional information.

Example
To list complete affinity information for all Check Point daemons, kernel instances and interfaces, including items without specific affinities, and with additional information, run:
fw ctl affinity -l -a -v
fw -i
Generally, when VPN-1 command line commands are executed on a CoreXL gateway, they relate to the gateway as a whole, rather than to an individual kernel instance. For example, the fw tab command enables viewing or editing of a single table of information aggregated for all kernel instances. You can specify that certain commands apply to an individual kernel instance by adding -i <kern> after fw in the command, where <kern> is the kernel instance's number.
fw -i applies to the following commands:

fw ctl debug (when used without the -buf parameter)
fw ctl get
fw ctl set
fw ctl leak
fw ctl pstat
fw monitor
fw tab
For details and additional parameters for these commands, see the Command Line Interface Reference Guide.
Example
To view the connections table for kernel instance #1, use the following command:
fw -i 1 tab -t connections
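When the same per-instance command is needed for every kernel instance, a short loop saves retyping. This sketch only prints the fw command lines rather than running them (the helper name is ours, not a Check Point command); on a gateway you would replace the echo with the command itself.

```shell
# Sketch: emit one "fw -i <kern> ..." command line per kernel instance,
# for instances numbered 0 through n-1.
for_each_instance() {
  n=$1; shift        # first argument: number of instances; rest: the command
  i=0
  while [ "$i" -lt "$n" ]; do
    echo "fw -i $i $*"
    i=$((i + 1))
  done
}

# A four-core platform has three instances (0, 1 and 2) by default:
for_each_instance 3 tab -t connections
# prints:
# fw -i 0 tab -t connections
# fw -i 1 tab -t connections
# fw -i 2 tab -t connections
```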